Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Cygni posted:

Only TSMC/AMD know those numbers now, I bet.

That's hilariously proprietary info. If that leaked or was released, it would tell Intel and all the foundry competitors a shitload about AMD's tech and TSMC's process technology. They might give some roundabout throw-away numbers, but any kind of detail will be 'lose your job, and then the lawsuit' tier proprietary.

Mr.Radar
Nov 5, 2005

You guys aren't going to believe this, but that guy is our games teacher.

iospace posted:

Would it be worth shelling out the extra 35 or so bucks for a 2200G over an Athlon 220G?

What are your use-cases? If it's primarily going to be a single-tasking office, web-browsing, or HTPC system then I'd say probably no. If it's going to do any kind of multi-tasking or 3D-intensive application (games) then absolutely yes.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

To others running Linux on ASRock boards, heads up:

I've updated to the most recent BIOS on 2 boards now (Fatality B450 MITX, and the A300 STX mini desktop), and in both cases my network interfaces were re-enumerated.

The B450 board's wireless interface moved from wlp36s0 to wlp7s0; the A300's wireless moved from wlp46s0 to wlp2s0.

Not the worst thing in the world, but I spent several minutes wondering how the gently caress a BIOS update broke my internet.
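
If you want to stop it happening again, a systemd `.link` file that matches the adapter by MAC address instead of PCI path should survive re-enumeration. A sketch (the MAC, filename, and chosen name below are placeholders; some distros also want the initramfs rebuilt afterwards):

```ini
# /etc/systemd/network/10-wlan0.link
# Match the adapter by MAC (placeholder value) rather than PCI path,
# so BIOS-driven bus renumbering can't change the interface name.
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=wlan0
```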

iospace
Jan 19, 2038


Mr.Radar posted:

What are your use-cases? If it's primarily going to be a single-tasking office, web-browsing, or HTPC system then I'd say probably no. If it's going to do any kind of multi-tasking or 3D-intensive application (games) then absolutely yes.

Probably going to be a capture PC if I do go with the upgrade.

buglord
Jul 31, 2010

Cheating at a raffle? I sentence you to 1 year in jail! No! Two years! Three! Four! Five years! Ah! Ah! Ah! Ah!

Buglord

Cygni posted:

The parts in the stack are all price engineered anyway. You aren't actually paying that price difference from top to bottom of the stack because of any real differences in production cost (with the exception of the 2 die parts obvi). You are paying that price difference because AMD thinks it can get you to pay it. The bill of materials for a 3800X will be something like $40 with the HSF and box, if that. But AMD needs to recoup those development costs, and you do that by charging a premium on the high end.

If they think there is demand for a 4/4 part, they will sell it. Sure, they will probably reuse some salvaged dies if they can, but the majority will be fully functional dies they disable. The Deneb 2 and 3 core parts are a great example of that. That said, by spinning off the IO, they have increased the likelihood that defects actually impact a CPU trace and nothing else, so it is possible the salvage rate is higher with Zen2 than prior parts. Only TSMC/AMD know those numbers now, I bet.


This stuff is pretty fascinating to me. Like how the better parts of a wafer are used for higher end parts and the less-good chunks are put in lower end parts. Are there any videos about how modern processors are made and go through all this? How do they get those giant wafers, why is the quality variable, why are the dudes in white suits in some impossibly sterilized room, what makes a processor high end or low end, how did they fit billions of transistors into something when one was gigantic in the 1950s? Basically a more detailed How It's Made: Processors.

Rusty
Sep 28, 2001
Dinosaur Gum
I think it was someone in this thread actually that recommended this talk, but it's pretty good, and even this engineer describes the process as "inseparable from magic". Also, there are a ton of trade secrets he has to talk around; it's pretty interesting.

https://www.youtube.com/watch?v=NGFhc8R_uO4&t=2729s

redeyes
Sep 14, 2002

by Fluffdaddy
And just think, Intel hasn't improved much of anything since that lecture. lol

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

BeastOfExmoor posted:

Weird, I would've sworn I'd seen it reported as 8+4 at Computex, but Anandtech seems to agree with you.

6+6 lets you pick out the best 6 cores from each chiplet (or rather, the best 3 cores from each CCX).

Not that I'm an expert or anything, but the binning strategy is obviously immensely more complex than "the best x% of chips become Epyc" like Reddit seems to think. It's probably not even a greedy strategy at all, apart from the handful of chips that are actually broken and need to be binned down (it's a minority; something like 70% of chips are fully functional even on 7nm). For example, Epyc has locked clocks, so shipping the top silicon as Epyc is pointless if it's better than the clock/voltage bin needed; that silicon might actually be better off shipped as Threadripper (where it can be overclocked). And leakage is not actually that big a problem, since high-leakage chips usually clock better. It's at least a combinatorial optimization problem, and I wouldn't be surprised if they actually calculated it out for each wafer and just attempted to maximize profit within quotas (meeting order quantities) and certain guidelines (try to ship x% as Epyc, etc).
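
Toy version of that profit-maximization idea, with every SKU name, price, quota, and fit rule invented from whole cloth (AMD's real pipeline is surely nothing this simple):

```python
# Toy model: assign one wafer's chips to SKUs to maximize revenue under
# quotas, instead of a simple "best chips -> server" rule. All numbers,
# SKU names, and fit rules below are invented for illustration.

def assign_bins(chips, products):
    """chips: list of (leakage, max_clock) tuples from one wafer.
    products: list of (name, price, quota, fits) where fits(chip) -> bool.
    Greedy by price here; a real version would be an ILP over the wafer."""
    remaining = list(chips)
    out = {}
    for name, price, quota, fits in sorted(products, key=lambda p: -p[1]):
        take = [c for c in remaining if fits(c)][:quota]
        out[name] = take
        remaining = [c for c in remaining if c not in take]
    return out

# An "Epyc-like" SKU wants low leakage at fixed clocks, so the hottest
# overclocking chips end up worth more as a "Threadripper-like" SKU.
chips = [(0.2, 4.6), (0.5, 4.9), (0.3, 4.2), (0.4, 4.8)]
products = [
    ("server",  500, 2, lambda c: c[0] <= 0.3),   # low leakage only
    ("hedt",    400, 2, lambda c: c[1] >= 4.7),   # top clockers
    ("desktop", 300, 4, lambda c: True),
]
result = assign_bins(chips, products)
```

Here the two leaky-but-fast chips land in the "hedt" bucket even though "server" pays more, because they fail the server leakage window, which is the whole point.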

Using the Epyc IO die as the chipset is wild though.

Ian Cutress and Wendell did a video where they're just talking about some of the possibilities that opens up and Ian is really jazzed about it.

tfw Wendell isn't even the smartest guy in the room.

Paul MaudDib fucked around with this message at 01:58 on Jun 14, 2019

SwissArmyDruid
Feb 14, 2014

by sebmojo

The unheralded brilliance of AMD's chiplet approach didn't dawn on me until Cutress noted that the first people to hit a new process node are the mobile chip makers, who then sort out all the problems with the new process through sheer volume. Then AMD comes along saying, "hey, we need a job done on that new process with high-perf libraries," but with chiplets still around the same size as what TSMC is already making.

God, I want AMD to get on Samsung fabs so goddamn bad, I've got a boner just thinking about the potential results.

Dramicus
Mar 26, 2010
Grimey Drawer

iospace posted:

Would it be worth shelling out the extra 35 or so bucks for a 2200G over an Athlon 220G?

I think the main benefit is the more powerful integrated graphics. I don't think the 4/4 has a massive advantage over the 2/4 in daily use. So if you plan to do any gaming at all, I guess you want the 2200G, as it could maybe handle 30 fps on low settings.

Klyith
Aug 3, 2007

GBS Pledge Week

iospace posted:

Probably going to be a capture PC if I do go with the upgrade.

As a video capture box, it's probably worth throwing an extra $30 at. Definitely worth $30 if your planned capture card doesn't have an onboard encoder.

Sub Rosa
Jun 9, 2010

64 core Threadripper by the end of the year?

https://wccftech.com/exclusive-amd-is-working-on-a-monster-64-core-threadripper-landing-as-early-as-q4-2019/

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'll pay up to 1300bux for a 32C Zen2 Threadripper.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


If AMD prices these in any way aggressively I’m going to be building a ton of Threadripper workstations at work next year. It would be an incredible compute density upgrade for them.

Khorne
May 1, 2002

Paul MaudDib posted:

Not that I'm an expert or anything, but the binning strategy is obviously immensely more complex than "the best x% of chips become Epyc" like Reddit seems to think.
It's very complicated and different from what reddit thinks. A good desktop bin is generally a bad mobile bin. And servers sit somewhere on a spectrum.

NewFatMike
Jun 11, 2015

Combat Pretzel posted:

I'll pay up to 1300bux for a 32C Zen2 Threadripper.

:same:

SwissArmyDruid
Feb 14, 2014

by sebmojo

Please, Dr. Su. I can't take any more excitement.

FlapYoJacks
Feb 12, 2009

SwissArmyDruid posted:

Please, Dr. Su. I can't take any more excitement.

My penis can only get so erect!

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Wake me up when I can get four 64-core chips on the same motherboard.

Progressive JPEG
Feb 19, 2003

With 64 cores/128 threads in a single socket I imagine you'd end up hitting weird scaling issues with the rest of the board/system and end up with the cores mostly idle anyway?

FlapYoJacks
Feb 12, 2009

Progressive JPEG posted:

With 64 cores/128 threads in a single socket I imagine you'd end up hitting weird scaling issues with the rest of the board/system and end up with the cores mostly idle anyway?

Not on my machine! I routinely hit all 48 threads!

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Progressive JPEG posted:

With 64 cores/128 threads in a single socket I imagine you'd end up hitting weird scaling issues with the rest of the board/system and end up with the cores mostly idle anyway?

If it was quad channel it would be unbalanced but I have seen some rumors saying that it might have the full eight channels, basically just consumer Rome.
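
For a sense of scale, a back-of-envelope sketch assuming DDR4-3200 (my assumption, not a confirmed spec for any of these parts):

```python
# Rough memory bandwidth per core for a hypothetical 64-core part.
# DDR4-3200 and the core count are assumptions for illustration.
GBS_PER_CHANNEL = 3200e6 * 8 / 1e9  # 3200 MT/s * 8 bytes/transfer = 25.6 GB/s

for channels in (4, 8):
    total = channels * GBS_PER_CHANNEL
    print(f"{channels}-channel: {total:.1f} GB/s total, "
          f"{total / 64:.2f} GB/s per core at 64 cores")
```

Quad channel works out to roughly 1.6 GB/s per core at 64 cores; eight channels doubles that.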

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

If it was quad channel it would be unbalanced but I have seen some rumors saying that it might have the full eight channels, basically just consumer Rome.

probably can't do that without socket changes, maybe in an embedded form factor for server boards but why would AMD do that when they could sell you Rome instead?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

MaxxBot posted:

If it was quad channel it would be unbalanced but I have seen some rumors saying that it might have the full eight channels, basically just consumer Rome.
Unbalanced how? Bandwidth starved, yes, but not unbalanced. All memory accesses go through a central IO die.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Yeah, I just meant bandwidth starved. I guess they might make such a chip anyway, since it wouldn't matter for some workloads.

Paul MaudDib posted:

probably can't do that without socket changes, maybe in an embedded form factor for server boards but why would AMD do that when they could sell you Rome instead?

TR4 and SP3 are physically identical; there's just an ID pin that keeps SP3 CPUs from working in a TR4 socket. I think it would be possible, just not sure they'd actually do it.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
48 cores and 4-channel would be more or less ideal for a lot of workloads; 64/4 might be a bit bandwidth-light for some, but that really depends on what all you have soaking up that CPU time. That, and having 144 MB of cache on the chip is frankly hilarious.

ZobarStyl
Oct 24, 2005

This isn't a war, it's a moider.

SwissArmyDruid posted:

The unheralded brilliance of AMD's chiplet approach didn't dawn on me until Cutress noted that the first people to hit a new process node are the mobile chip makers, who then sort out all the problems with the new process through sheer volume. Then AMD comes along saying, "hey, we need a job done on that new process with high-perf libraries," but with chiplets still around the same size as what TSMC is already making.

God, I want AMD to get on Samsung fabs so goddamn bad, I've got a boner just thinking about the potential results.
I look back to how I reacted when AMD and GloFo split, thinking 'what the hell kinda CPU design company doesn't even own its own fabs?' How naive that thought seems now, with fabless designers and pure-play foundries being the future for everyone but the biggest players. Intel has historically said that building out each new process node/fab was functionally betting the entire company. I have to wonder when that becomes prohibitive to the point that you see Intel designs fabbed on even tinier Samsung nodes as well.

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010

ratbert90 posted:

Not on my machine! I routinely hit all 48 threads!

I've got a use case with some people demanding all our SQL data in an Excel spreadsheet. So yeah, gimme cores.

FlapYoJacks
Feb 12, 2009

incoherent posted:

I've got a use case with some people demanding all our SQL data in an Excel spreadsheet. So yeah, gimme cores.

I run multiple VMs and also routinely compile Buildroot and Yocto images. I could max out as many cores as you give me.

Klyith
Aug 3, 2007

GBS Pledge Week

ZobarStyl posted:

I look back to how I reacted when AMD and GloFo split, thinking 'what the hell kinda CPU design company doesn't even own its own fabs?' How naive that thought seems now, with fabless designers and pure-play foundries being the future for everyone but the biggest players. Intel has historically said that building out each new process node/fab was functionally betting the entire company. I have to wonder when that becomes prohibitive to the point that you see Intel designs fabbed on even tinier Samsung nodes as well.

I feel like it's an ok business plan for a company that's selling as much silicon as intel to own their own fabs.

What was dumb and super arrogant was them saying "gently caress all y'all, we're doing our own process generation! 10nm, we don't care what everybody else is standardizing on!"

Laslow
Jul 18, 2007
It’s not like Intel doesn’t have the cash to license process tech for use in their own fabs.

Can anyone more knowledgeable in the workings of the semiconductor industry explain the business or technical reasons why they can’t or won’t?

I know this is the AMD thread and all, and I don’t mean to derail, I’m just really curious.

iospace
Jan 19, 2038


Laslow posted:

It’s not like Intel doesn’t have the cash to license process tech for use in their own fabs.

Can anyone more knowledgeable in the workings of the semiconductor industry explain the business or technical reasons why they can’t or won’t?

I know this is the AMD thread and all, and I don’t mean to derail, I’m just really curious.

I think the biggest thing is that, for the longest time, Intel was the best. Why license someone else's process when you can use your own, better one? There's a reason the 2500k is a meme. Everything from that era that came out of Intel's plants, even with mitigations applied, took a giant poo poo on AMD's contemporary offerings. AMD, under Dr. Su's leadership, has really put the pedal down.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Laslow posted:

It’s not like Intel doesn’t have the cash to license process tech for use in their own fabs.

Can anyone more knowledgeable in the workings of the semiconductor industry explain the business or technical reasons why they can’t or won’t?

I know this is the AMD thread and all, and I don’t mean to derail, I’m just really curious.

I'm pretty sure poo poo would have to fail even worse than it currently is for Intel to look elsewhere. 10nm might be a total flop, but 7nm might still be good. I think it would take total failure of both 10nm and 7nm for them to consider looking elsewhere.

PC LOAD LETTER
May 23, 2005
WTF?!

MaxxBot posted:

I think it would take total failure of both 10nm and 7nm for them to consider looking elsewhere.

If Intel's 7nm turns out to be as big of a shitshow as their 10nm has been then I think they'll be pretty much forced to look elsewhere for high performance parts at a minimum. They won't really have a choice anymore.

So far, though, there hasn't been anything solid to suggest their 7nm will be that bad. Just hints and rumors that it isn't going to be as good as advertised or on time, despite the current Intel PR amounting to "everything's fine, guys".

SwissArmyDruid
Feb 14, 2014

by sebmojo
https://twitter.com/barronsonline/status/1139719100635070464

Fabulousity
Dec 29, 2008

Number One I order you to take a number two.

I just had a random thought that is maybe happening in a parallel universe somewhere: Donald Trump is somehow CEO of Intel and on the eve of Ryzen 3x's release he throws a temper tantrum and revokes AMD's x86 license. AMD then responds by revoking Intel's x86-64 license.

I'm not sure what happens after that but I bet it'd be funny as long as you don't work in tech.

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
I don't know what would happen exactly, but I do know that licensing wars are good and easy to win.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Simple, Intel would go back to their traditional business of selling memory.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Digitimes posted:

ASMedia has landed orders for AMD's B550 and A520 chipsets that support PCIe 3.0 and will kick off shipments to motherboard ODMs and OEMs in the fourth quarter of 2019.
So if B550 doesn't start shipping to OEMs before Q4 2019, I guess that means B550 won't reach consumers before early 2020? Oh well, X470 and B450 are pretty much good enough anyway.

RME
Feb 20, 2012

Has AMD indicated that Zen2 would have their TSX equivalent?
It's a really marginal use case but the PS3 emulator can actually leverage it for notable performance boosts, but instruction sets don't exactly build hype when you're trying to sell your new stack
