BlankSystemDaemon
Mar 13, 2009



wargames posted:

I think it's the multiple menus/EXPO settings fighting with one another, or multiple areas adding an offset. Like EXPO is adding a 0.2V offset and something else is adding another 0.1V offset or something.
That probably is it, but why are there multiple places to set it in the first place?

One of the most important things about writing configuration into software is that there's only one place to configure it, on some sort of non-volatile storage.
You can then either restart the software or, in some instances, have it reload itself based on that configuration. Runtime configuration is always the worse option. It doesn't matter if it's user-space, kernel-space, or firmware - and it doesn't even matter if it's obfuscated like it is in Windows or macOS.
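To make that concrete, here's a minimal sketch of the pattern in Python (purely illustrative; the daemon and file path are made up): every setting lives in one file on non-volatile storage, and the running process re-reads that single source of truth when it gets SIGHUP, the classic Unix reload convention.

code:

# Single source of truth: one config file on disk, reloaded on SIGHUP.
# Illustrative sketch only; the path and settings are invented.
import json
import signal
import time

CONFIG_PATH = "/etc/mydaemon.json"  # the ONE place settings live
config = {}

def load_config(signum=None, frame=None):
    global config
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    print(f"config (re)loaded: {config}")

signal.signal(signal.SIGHUP, load_config)  # `kill -HUP <pid>` reloads
load_config()

while True:
    time.sleep(60)  # real work would go here, reading from `config`

Anything that wants to change a setting edits the file and sends the signal; nothing pokes at the running process directly, so there's never a second, disagreeing copy of the configuration.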

Klyith posted:

Pretty much yes.

The problem for any chipmaker like AMD is that the big customers, if they want an ARM server, can make it themselves. Amazon did, Google is doing so, and even MS supposedly has a team poking at it.

And that's not just AMD. Samsung has occasionally made noise about doing ARM server CPUs, and there have been a bunch of smaller companies that come in with a splashy announcement about their new server chips. Some of them have been successful for a generation or two, but the ARM market is insanely competitive.
None of them made their own CPU, though.

They took a Neoverse IP core from ARM, because that's what AWS Graviton is, and Google's butt service is just an Altra (Max?) from Ampere.
They're just paying licensing costs to ARM, instead of paying for CPUs from AMD or Intel.

If you hate money, you can buy one from Gigabyte with 256 cores and no SMT.


hobbesmaster
Jan 28, 2008

Twerk from Home posted:

Did AMD fully shut down / sell off / give up on making ARM CPUs? I feel like Ampere, AWS, and Apple have already done the hard work, and software portability to ARM is the best that it's ever been.

Klyith posted:

Pretty much yes.

So… the question here is what you mean by "AMD", "shut down", and "ARM CPUs". I don't recall anything, and a quick search didn't reveal anything other than AMD saying that their custom silicon solutions can use ARM processors.

Anyway… Xilinx is AMD and sells plenty of MPSoCs, which are 2-6 core ARM SoCs with an FPGA.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
https://www.theregister.com/2022/06/20/jim_keller_arm_cpu/

They had an ARM project that got canned when Keller left.

wargames
Mar 16, 2008

official yospos cat censor

Twerk from Home posted:

Did AMD fully shut down / sell off / give up on making ARM CPUs? I feel like Ampere, AWS, and Apple have already done the hard work, and software portability to ARM is the best that it's ever been.

ARM may not be the outright fastest option for single-threaded work, but my understanding is that there's real power efficiency to be had: ARM on a modern node like TSMC 5nm beats both current Epyc and Xeon. Either that, or AWS is running their ARM pricing as a loss leader right now.

They sold off their ARM stuff during the bad times. They are, however, using an ARM core on their x86 CPUs as a security enclave.

Cygni
Nov 12, 2005

raring to post

ConanTheLibrarian posted:

I know Nintendo don't target high specs but that seems like a really underpowered APU.

Assuming they go with the 1024 CUDA core version of the Orin like the rumors say, it is a pretty significant step up from the X1. Likely at least 3x the GPU performance, and it's all but certain that they are going to lean heavily on DLSS. The Switch launched at $299 with an included dock, and I imagine Nvidia would like to land there again.

hobbesmaster
Jan 28, 2008

Pablo Bluth posted:

https://www.theregister.com/2022/06/20/jim_keller_arm_cpu/

They had an ARM project that got canned when Keller left.


quote:

In the talk, available online via YouTube, Keller discusses how when planning the Zen 3 core – now at the heart of AMD's "Milan" Epyc processor chips – he and other engineers realized that much of the architecture was very similar for Arm and X86 "because all modern computers are actually RISC machines inside," and hence according to Keller, "the only blocks you have to change are the [instruction] decoders, so we were looking to build a computer that could do either, although they stupidly cancelled that project."

That project was apparently the K12, which was planned to be AMD's first custom microarchitecture based on the 64-bit ARMv8-A instruction set, and would have led to chips that would follow on after the Opteron A1100 series chips, which were based on Arm's Cortex-A57 core designs.

The question would be: what's the value-add over the ARM IP cores?

wargames
Mar 16, 2008

official yospos cat censor

BlankSystemDaemon posted:

That probably is it, but why are there multiple places to set it in the first place?

One of the most important things about writing configuration into software is that there's only one place to configure it, on some sort of non-volatile storage.
You can then either restart the software or, in some instances, have it reload itself based on that configuration. Runtime configuration is always the worse option. It doesn't matter if it's user-space, kernel-space, or firmware - and it doesn't even matter if it's obfuscated like it is in Windows or macOS.


Because, if I remember right, they had a suppppper talented BIOS writer/programmer leave like 4-5 years ago over a lack of raises. And why would you pay your employees what they are worth?

Fabulousity
Dec 29, 2008

Number One I order you to take a number two.

wargames posted:

Because, if I remember right, they had a suppppper talented BIOS writer/programmer leave like 4-5 years ago over a lack of raises. And why would you pay your employees what they are worth?

Hey fella, some of those poor executives are struggling to pay third mortgages and boat loans! Won't somebody think of the C-suite?

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

repiv posted:

the GBA, DS, 3DS, wii and wii u could all play previous generation titles with near-100% compatibility, and the switch gets a pass since the form factor change made backward compatibility impractical

And the Game Boy Color, which could also play Game Boy games. There was also a Game Boy adapter for the SNES & GameCube. As you say, Nintendo definitely has a legacy of reasonable backwards compat.

repiv
Aug 13, 2009

i think people tend to think of the GBC as a "pro" refresh of the gameboy, not a distinct generation, likewise with the DSi

but yeah more often than not nintendo has made their systems backwards compatible

Dr. Video Games 0031
Jul 17, 2004

Cygni posted:

Assuming they go with the 1024 CUDA core version of the Orin like the rumors say, it is a pretty significant step up from the X1. Likely at least 3x the GPU performance, and it's all but certain that they are going to lean heavily on DLSS. The Switch launched at $299 with an included dock, and I imagine Nvidia would like to land there again.

A 3 - 4x boost to total system performance is about what I'm expecting, which is kind of disappointing to be honest, but it is what it is. I think the Switch maxes out at 10 watts, docked. If Nintendo wants to meet that same target again, then a 3x boost to performance is probably the best you can hope for, and it might even be a little lower. It all comes down to the efficiency improvements going from Maxwell on TSMC 20nm to Ampere on Samsung 8nm, which are probably smaller than you'd think.

But if Nintendo decides to up the power budget, then that changes things. If they really wanted to, they could allow a much bigger gap between docked and handheld power targets. Doing 20 - 30 watts docked instead of 10 would help them output to 4K TVs using DLSS. The catch is that they'd have to make the handheld bulkier to accommodate a better cooling solution, but I think the ROG Ally is showing that decent cooling can be had in an acceptable form factor (YouTuber hands-on impressions said the device wasn't very loud at 30 watts).

edit: why are we talking about this in the AMD CPU thread again?

Dr. Video Games 0031 fucked around with this message at 21:17 on May 4, 2023

Cygni
Nov 12, 2005

raring to post

Dr. Video Games 0031 posted:

Doing 20 - 30 watts docked instead of 10 would help them output to 4K TVs using DLSS. The catch is that they'd have to make the handheld bulkier to accommodate a better cooling solution, but I think the ROG Ally is showing that decent cooling can be had in an acceptable form factor (YouTuber hands-on impressions said the device wasn't very loud at 30 watts).

Build a 120mm fan and a baffle into the dock, baby :twisted:

yummycheese
Mar 28, 2004

Klyith posted:

Pretty much yes.

The problem for any chipmaker like AMD is that the big customers, if they want an ARM server, can make it themselves. Amazon did, Google is doing so, and even MS supposedly has a team poking at it.

Yep. MS has farmed out a custom ARM CPU to Marvell to design and manufacture. Doesn't have to be too fancy, just more cost-efficient and cheaper than x86 from the big vendors.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Klyith posted:


And that's not just AMD. Samsung has occasionally made noise about doing ARM server CPUs, and there have been a bunch of smaller companies that come in with a splashy announcement about their new server chips. Some of them have been successful for a generation or two, but the ARM market is insanely competitive.

The insanely competitive state of the ARM market seems like a biiiig problem for the x86 vendors. ARM doesn't have to win in every way or all of the time to take a big chunk out of Intel and AMD's highest margin server market, and the competition in the ARM space means that they are iterating and evolving really fast.

Over the last few years it's gone from "yeah ARM may be efficient but it doesn't clock high or perform well" to "Sure ARM has decent performance but none of the platforms have a lot of I/O or memory bandwidth" to "Well ARM doesn't have wide vector extensions and can't compete on vectorized workloads". Now SVE is here and ARM pricing is competitive even for small time purchasers.

If your stuff can run on ARM it's cheaper to do so on AWS now, even if performance is a little worse. You can buy those Ampere Altra boxes from several vendors for reasonable prices. ARM is here not to conquer all spaces, but to eat away at both AMD and Intel profits.

Klyith
Aug 3, 2007

GBS Pledge Week

Twerk from Home posted:

The insanely competitive state of the ARM market seems like a biiiig problem for the x86 vendors. ARM doesn't have to win in every way or all of the time to take a big chunk out of Intel and AMD's highest margin server market

Could very well be, but I don't know and am not quite ready to go chicken little just yet. The highest margin space for AMD and Intel is still the HPC stuff.

An alternate hypothesis is like, for a long long time power efficiency wasn't the biggest demand from server customers. It is now because the clown succeeded in eating the world, and is now competing against itself and hunting for more profit margin. ARM was much better positioned to take advantage of that change in emphasis than x86.


And in the non-server space, it's still ... very difficult to get good comparisons for gaming performance between normal enthusiast PCs and high end ARM (ie M1/2).


Personally I'm happy for ARM to be competitive in areas besides cell phones and chromebooks. The stretch where AMD poo poo the bed and we had near-total Intel monopoly was the worst. Now even if one x86 company goes kablooie they'll still have to watch their back and compete.

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

The insanely competitive state of the ARM market seems like a biiiig problem for the x86 vendors. ARM doesn't have to win in every way or all of the time to take a big chunk out of Intel and AMD's highest margin server market, and the competition in the ARM space means that they are iterating and evolving really fast.

Over the last few years it's gone from "yeah ARM may be efficient but it doesn't clock high or perform well" to "Sure ARM has decent performance but none of the platforms have a lot of I/O or memory bandwidth" to "Well ARM doesn't have wide vector extensions and can't compete on vectorized workloads". Now SVE is here and ARM pricing is competitive even for small time purchasers.

If your stuff can run on ARM it's cheaper to do so on AWS now, even if performance is a little worse. You can buy those Ampere Altra boxes from several vendors for reasonable prices. ARM is here not to conquer all spaces, but to eat away at both AMD and Intel profits.
The Altra Max has 128 cores at a base clock of 3GHz.

It's considerably better clocked than the highest core count processors from AMD and Intel, which have 96 cores at 2.4GHz base clock (3.55GHz all-core boost, hypothetically) and 60 cores at 1.9GHz base clock (3.5GHz all-core boost, hypothetically), respectively.
All-core boost is hypothetical because no HPC cluster I know of enables turbo-boost, as the systems aren't validated for it. They also disable SMT, so the number of threads AMD and Intel have is irrelevant.
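Back-of-the-envelope (my napkin math, ignoring IPC, memory bandwidth, and everything else that actually decides HPC performance), the aggregate base-clock throughput works out like this:

code:

# Crude aggregate throughput: cores x base clock, SMT off, no turbo.
# Ignores IPC and memory bandwidth entirely; napkin math only.
parts = {
    "Ampere Altra Max": (128, 3.0),  # cores, base GHz
    "AMD (96 cores)":   (96,  2.4),
    "Intel (60 cores)": (60,  1.9),
}
for name, (cores, ghz) in parts.items():
    print(f"{name}: {cores * ghz:.0f} core-GHz")
# Ampere Altra Max: 384 core-GHz
# AMD (96 cores): 230 core-GHz
# Intel (60 cores): 114 core-GHz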

Bloodplay it again
Aug 25, 2003

Oh, Dee, you card. :-*
Nearly six months to the day after building a system with the Gigabyte B650E Aorus Master, I think the board might be toast. Aside from a weird integrated graphics issue I posted about somewhat recently, it had been performing well without issues.

I tried restarting my PC a bit ago and it refuses to boot. It sits at code 15 (northbridge mem init) for a bit for memory training, eventually cycles over to 97 (console output devices connected) for 20 seconds, where I get GPU output to the monitor (just a blinking underscore), and then it restarts again. Oddly, even when I hold the power button to force power off, the RGB on the motherboard stays lit. Historically, it has turned off when the PC is off. I can't even get into the BIOS anymore. Gonna try to clear CMOS, I guess. :(

edit:

Okay, after resetting CMOS, the computer restarted 3 times in a row before outputting any video. Finally, I saw the AORUS splash screen on the third boot and could get into the BIOS. After being greeted by the CMOS reset screen, I enabled advanced settings, turned on the EXPO profile (it still states 1.3V in the BIOS; HWiNFO says 1.245V), disabled memory context restore, disabled SATA hotswap, disabled the Gigabyte bloatware installer, and disabled integrated graphics. It still took a couple of restarts, but I knew I was making headway because at least the splash screen was appearing whenever it rebooted. Thankfully, after about an hour of futzing around, Windows is back up and running.

I am still not sure what the issue could have been. It seems the CMOS corrupted itself? I'm glad it's up and running again and can only hope it's not a sign of issues to come. My fans were going 100% as the PC booted, then they'd all basically turn off as it hung on qcode 97, kicking back up to 100% at reboot. My best educated guess is that it corrupted itself during the restart, as it was never disconnected from AC power and there's also no way the CR2032 battery on the board is dead.

Bloodplay it again fucked around with this message at 11:54 on May 5, 2023

Cygni
Nov 12, 2005

raring to post

Quote from Lisa Su on the Moore's Law debates going on (Jensen said it's dead, Gelsinger said it's not):

Lisa Su posted:

I would certainly say I don’t think Moore’s Law is dead. I think Moore’s Law has slowed down. We have to do different things to continue to get that performance and that energy efficiency. We’ve done chiplets—that’s been one big step. We’ve now done 3-D packaging. We think there are a number of other innovations, as well. Software and algorithms are also quite important. I think you need all of these pieces for us to continue this performance trajectory that we’ve all been on.

...

Yes. The transistor costs and the amount of improvement you’re getting from density and overall energy reduction is less from each generation. But we’re still moving [forward] generation to generation. We’re doing plenty of work in 3 nanometer today, and we’re looking beyond that to 2 nm as well. But we’ll continue to use chiplets and these type of constructions to try to get around some of the Moore’s Law challenges.

So basically I guess her take is "Moore's Law isn't dead, but yes it is dead". Really, a lot of the discussion seems to be people using "Moore's Law" to mean everything from the original "transistor counts will double" definition (which Intel is still using), to redefining it as "ICs will continue to get faster through one pathway or another" (AMD's definition, apparently) or "ICs will continue to get cheaper at a given performance level" (Nvidia's definition, apparently).

It seems everyone fundamentally agrees that cheap and cheerful process improvements are over, though.

Dr. Video Games 0031
Jul 17, 2004

AMD's definition seems even vaguer, considering they're throwing software into the mix with that statement. Now it's just "computer performance will continue to improve in some capacity going forward." Which... cool? Very insightful.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I took the software mention to be a reference to software possibly needing to be changed in order to make the most of hardware improvements. We saw this with multi-core pushing people into threading, and cache-oblivious algorithms, etc.

BlankSystemDaemon
Mar 13, 2009



Moore's Law is dead because Wright's law predicted it much more accurately long before: unit cost falls by a fixed percentage with every doubling of cumulative production.
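A quick sketch of what that means, with purely illustrative numbers:

code:

# Wright's law: cost(n) = cost(1) * n^-b, where b is set by the
# learning rate. Numbers below are purely illustrative.
import math

def wright_cost(first_unit_cost, n, learning_rate=0.20):
    """Cost of unit n, assuming a 20% cost drop per doubling."""
    b = -math.log2(1 - learning_rate)  # progress exponent, ~0.32
    return first_unit_cost * n ** -b

print(wright_cost(100.0, 1000))  # ~10.8: unit #1000 costs ~11% of unit #1

Unlike Moore's law, it's pegged to cumulative production rather than calendar time, which is why it keeps working when the calendar cadence slips.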

Wibla
Feb 16, 2011

Cygni posted:

Quote from Lisa Su on the Moore's Law debates going on (Jensen said it's dead, Gelsinger said it's not):

So basically I guess her take is "Moore's Law isn't dead, but yes it is dead". Really, a lot of the discussion seems to be people using "Moore's Law" to mean everything from the original "transistor counts will double" definition (which Intel is still using), to redefining it as "ICs will continue to get faster through one pathway or another" (AMD's definition, apparently) or "ICs will continue to get cheaper at a given performance level" (Nvidia's definition, apparently).

It seems everyone fundamentally agrees that cheap and cheerful process improvements are over, though.

The bit about Nvidia :ironicat:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The gently caress did "AMD Software Adrenalin Edition" or whatever the gently caress it was install itself here? I don't have a Radeon card, nor did I consent to it installing. --edit: At least the annoying Razer stuff, that was regularly triggered by that old DeathAdder, had the courtesy to ask whether to continue installing or not.

Combat Pretzel fucked around with this message at 23:27 on May 5, 2023

Dr. Video Games 0031
Jul 17, 2004

But you do have a Radeon GPU in your CPU. It's actually a really good software suite that somehow doesn't bog down your computer like everyone else's software suite, but I don't know why it's installing automatically on your system.

Kibner
Oct 21, 2008

Acguy Supremacy

Combat Pretzel posted:

The gently caress did "AMD Software Adrenalin Edition" or whatever the gently caress it was install itself here? I don't have a Radeon card, nor did I consent to it installing.

If you have a 7000 series cpu, it comes with an igpu and that's what those gpu drivers are for.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

The gently caress did "AMD Software Adrenalin Edition" or whatever the gently caress it was install itself here? I don't have a Radeon card, nor did I consent to it installing. --edit: At least the annoying Razer stuff, that was regularly triggered by that old DeathAdder, had the courtesy to ask whether to continue installing or not.

If you are annoyed by windows installing drivers you didn't ask for, you can fix that with settings -> search for "device installation" -> change to No.
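If you'd rather script it, the same toggle can be flipped in the registry. Here's a sketch using Python's stdlib winreg; note that the exact value name is my recollection rather than anything documented in this thread, so verify it before relying on it (and run elevated):

code:

# Flip "device installation settings" to No via the registry.
# CAUTION: the SearchOrderConfig mapping is an assumption; verify first.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as k:
    # 0 = don't automatically fetch drivers from Windows Update
    winreg.SetValueEx(k, "SearchOrderConfig", 0, winreg.REG_DWORD, 0)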

Anime Schoolgirl
Nov 28, 2002

Glofo wafer agreement rearing its ugly head again:

https://videocardz.com/newz/amd-reportedly-resumes-production-of-ryzen-3000g-series

Cygni
Nov 12, 2005

raring to post

Wibla posted:

The bit about Nvidia :ironicat:

Nvidia believes (their definition of) Moore's Law is dead, though, so they believe that future performance increases will come with higher silicon costs and higher prices. Which is what we've been seeing from them, so it tracks.

SwissArmyDruid
Feb 14, 2014

by sebmojo

My interpretation: Moore's Law isn't dead, but it's in the hospital in the ICU on life support. The way that the semiconductor industry navigates the next few process nodes will determine whether or not it actually dies, or if it gets discharged from the hospital to go back into hospice care at home.

Yudo
May 15, 2003

I have an idea that will raise yields, lower costs, and navigate around several of the physical limitations on what are more or less stagnant process nodes: chiplets. If only someone would come up with some kind of bus to link them together...

Shipon
Nov 7, 2005

SwissArmyDruid posted:

My interpretation: Moore's Law isn't dead, but it's in the hospital in the ICU on life support. The way that the semiconductor industry navigates the next few process nodes will determine whether or not it actually dies, or if it gets discharged from the hospital to go back into hospice care at home.

the real question is what does the whole economy do when it can't just continue to throw money at the tech sector relying on exponentially increasing compute power, but this is probably the wrong place to ask that

wargames
Mar 16, 2008

official yospos cat censor

But the wafer agreement is fulfilled by having GloFo make all the I/O dies on their 12nm node.

kliras
Mar 27, 2021
as said before, be ... extra careful with downloading bios updates, especially for msi motherboards

https://twitter.com/tomshardware/status/1644350198276149248

https://twitter.com/PCMag/status/1654535238779895808

https://twitter.com/hn_frontpage/status/1654785723478732800

Kazinsal
Dec 13, 2011
as if this platform wasn't loving cursed enough

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Kibner posted:

If you have a 7000 series cpu, it comes with an igpu and that's what those gpu drivers are for.
Yeah, but this thing has been up and running for 1.5 months already. A bit late, no?

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

Cygni posted:

Nvidia believes (their definition of) Moore's Law is dead, though, so they believe that future performance increases will come with higher silicon costs and higher prices. Which is what we've been seeing from them, so it tracks.
As much as it sucks, they're not wrong. It really is much more expensive to use a newer node:


[chart of rising wafer costs per process node, from https://www.techpowerup.com/274720/tsmc-achieves-major-breakthrough-in-2-nm-manufacturing-process-risk-production-in-2023]

3nm is something like 25-30% more expensive than 5nm iirc. Plus there's ugly news like this: https://www.phonearena.com/news/tsmc-us-chips-30-percent-premium_id147305

wargames posted:

But the wafer agreement is fulfilled by having GloFo make all the I/O dies on their 12nm node.

Zen 4 doesn't use 12nm for the IO chiplet, so AMD may need to make up the numbers by ordering other chips. They extended the wafer supply agreement in 2021, so it's a problem of their own making: https://www.anandtech.com/show/17132/amd-and-globalfoundries-wafer-supply-agreement-updated-once-more-now-21b-through-2025

wargames
Mar 16, 2008

official yospos cat censor

ConanTheLibrarian posted:


Zen 4 doesn't use 12nm for the IO chiplet, so AMD may need to make up the numbers by ordering other chips. They extended the wafer supply agreement in 2021, so it's a problem of their own making: https://www.anandtech.com/show/17132/amd-and-globalfoundries-wafer-supply-agreement-updated-once-more-now-21b-through-2025

are they using 7nm for the i/o die?

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
TSMC N6, so yeah basically.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

The gently caress did "AMD Software Adrenalin Edition" or whatever the gently caress it was install itself here? I don't have a Radeon card, nor did I consent to it installing. --edit: At least the annoying Razer stuff, that was regularly triggered by that old DeathAdder, had the courtesy to ask whether to continue installing or not.
Motherboard manufacturers have the option of adding software that gets installed as a value-add (via the Windows Platform Binary Table in firmware), I think.
Most don't implement it, but just in case - which vendor + model do you have?

It could also just be Windows Update; you should be able to find it in the log files via Event Viewer: Applications and Services Logs\Microsoft\Windows\WindowsUpdateClient\Operational
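If you want to pull the recent entries from that channel without clicking through Event Viewer, here's a small sketch that shells out to the built-in wevtutil tool from Python:

code:

# Dump the 20 most recent Windows Update client events as text.
# Uses the built-in wevtutil CLI; run elevated if access is denied.
import subprocess

CHANNEL = "Microsoft-Windows-WindowsUpdateClient/Operational"
result = subprocess.run(
    ["wevtutil", "qe", CHANNEL, "/c:20", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)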


Enos Cabell
Nov 3, 2004


New Gigabyte F8d BIOS today, but still on AGESA 1.0.0.6. It does explicitly mention the SoC voltage changes, though.
