SoftNum
Mar 31, 2011

I just got done setting up Passthrough on my Ryzen 1700X system and there were no issues. It runs great now, both systems have been stable for a couple days, and (along with passing through an SSD just for windows) I get near native performance out of the system on the benchmarks I've run.

I had enlightenment crash on me a few times, but I switched to i3 and it's been stable, so I think that's e's fault.

I also have yet to run into the stack-crash cache error some people talk about.

PC LOAD LETTER
May 23, 2005
WTF?!

Munkeymon posted:

Yeah, and I was saying the opposite of that
My point was it wouldn't have mattered. Itanium could've been everywhere and the compilers to make it work well still would never have appeared. It's a fundamental problem that can't be fixed by market share or part volume, so whether Itanium took off or not is a moot point when talking about compiler development.

Truga
May 4, 2014
Lipstick Apathy

SoftNum posted:

I had enlightenment crash on me a few times, but I switched to i3 and it's been stable, so I think that's e's fault.

As an e user for over a decade: switch to kde. There's probably a kwinscript now that does the window management thing you want from your WM, and having a proper, working DE is real nice. E used to be fast and very nice back around the e17 alpha/beta releases; when it did crash, maybe once a month, it'd just restart itself and you'd barely notice. Now it's a loving tyre fire.

The main dev is a huge idiot too, so it's only ever going to get worse, never better. :negative:

Mr Shiny Pants
Nov 12, 2012
Once all the big Unix vendors gave up on competing against Windows NT, x86 was always going to win, just by virtue of running the most ubiquitous network operating system and client OS.

NewFatMike
Jun 11, 2015

SoftNum posted:

I just got done setting up Passthrough on my Ryzen 1700X system and there were no issues. It runs great now, both systems have been stable for a couple days, and (along with passing through an SSD just for windows) I get near native performance out of the system on the benchmarks I've run.

I had enlightenment crash on me a few times, but I switched to i3 and it's been stable, so I think that's e's fault.

I also have yet to run into the stack-crash cache error some people talk about.

Oooh gaming benches looking good or productivity ones?

SoftNum
Mar 31, 2011

NewFatMike posted:

Oooh gaming benches looking good or productivity ones?

I used Heaven informally and got 60-70 fps at ultra at 1440p. I'm going to run some more "scientific" passes this weekend and post results.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



PC LOAD LETTER posted:

My point was it wouldn't have mattered. Itanium could've been everywhere and the compilers to make it work well still would never have appeared. It's a fundamental problem that can't be fixed by market share or part volume, so whether Itanium took off or not is a moot point when talking about compiler development.

Guess I just don't believe that without some proof that it's impossible to write a decent compiler for it. Maybe more effort went into it that I'm not aware of?

feedmegin
Jul 30, 2008

Munkeymon posted:

Guess I just don't believe that without some proof that it's impossible to write a decent compiler for it. Maybe more effort went into it that I'm not aware of?

Hi I write compilers and things for fun and do low-level software dev for :10bux:

The thing with the Itanium is that it was specifically intended to take a lot of the complexity that currently runs at runtime in a modern out-of-order CPU and put it onto the compiler instead, and it has a lot of hardware features intended to support this. The idea is that you can then make the CPU much simpler, which means you can make it go much faster. Two problems here, though:

a) Itanium was a collaboration between two companies: Intel, which is traditionally poo poo at inventing new CPU architectures that aren't x86, and HP, who threw a poo poo ton of stuff in from PA-RISC, their previous CPU design. Itanium did not end up simple. It also ended up being released years later than planned, at which point x86 clock rates had increased a lot, because Moore's Law was still a thing in the 90s.

b) The people who came up with this cunning plan were mostly hardware guys. They anticipated compilers getting a lot smarter to make all this poo poo work. Compilers didn't, because this sort of optimisation is loving hard. Plus there's the fundamental disconnect between an OoO CPU, which can see the actual things that are going on right now and schedule instructions etc. optimally, and a compiler, which has to see into the future and guess what's going to happen ahead of time. Itanium depended on compiler researchers to invent literal magic to fulfil its potential, and that never happened.
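
To make that concrete, here's a toy C sketch (my own illustration, nothing Itanium-specific). The compiler can't know at build time whether each load in a pointer chase hits L1 or main memory, so a static schedule has to assume some fixed latency; an OoO core discovers the real latency at runtime and overlaps whatever it can.

code:
/* Toy illustration: data-dependent loads with unknowable latency.
   Build: cc -O2 chase.c -o chase */
#include <stdio.h>
#include <stdlib.h>

struct node { struct node *next; long val; };

/* Each load's address depends on the previous load's result, and its
   latency depends on where that node happens to sit in the cache
   hierarchy. A compiler scheduling statically has to guess; an OoO
   core just waits exactly as long as needed and runs ahead. */
static long walk(struct node *p) {
    long sum = 0;
    while (p) { sum += p->val; p = p->next; }
    return sum;
}

int main(void) {
    enum { N = 1 << 20 };
    struct node *nodes = malloc(sizeof *nodes * N);
    long *perm = malloc(sizeof *perm * N);
    if (!nodes || !perm) return 1;
    for (long i = 0; i < N; i++) perm[i] = i;
    srand(1);
    for (long i = N - 1; i > 0; i--) {      /* crude shuffle; fine for a demo */
        long j = rand() % (i + 1);
        long t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    /* Chain the nodes in shuffled order so the chase is cache-hostile. */
    for (long i = 0; i < N; i++) {
        nodes[perm[i]].val = i;
        nodes[perm[i]].next = (i + 1 < N) ? &nodes[perm[i + 1]] : NULL;
    }
    printf("sum = %ld\n", walk(&nodes[perm[0]]));
    free(nodes); free(perm);
    return 0;
}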

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
https://www.phoronix.com/scan.php?page=article&item=new-ryzen-fixed&num=1

It appears that some Ryzen CPUs shipped with a flaw which was causing the segfault issue. However, it did not affect the majority of CPUs, and CPUs after week 30 do not have the defect at all. It's likely AMD started junking these processors once they had volume up enough, but early on they were included to make sure enough chips shipped to retailers. It's also probably why EPYC and TR never displayed such behavior, nor likely would as top bins. My own opinion, not drawn from the article but based on reports from people who were able to get even flawed CPUs to stop segfaulting: the defect seems to be centered on how Ryzen handles the micro-op cache and SMT, and it only happens when fully loaded.

EDIT: I should also point out that AMD is replacing the CPUs for affected users. Why is the CPU division so much more loving competent?

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

FaustianQ posted:

EDIT: I should also point out that AMD is replacing the CPUs for affected users. Why is the CPU division so much more loving competent?

Because they can smell blood in the water.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



feedmegin posted:

Plus there's the fundamental disconnect between an OoO CPU, which can see the actual things that are going on right now and schedule instructions etc. optimally, and a compiler, which has to see into the future and guess what's going to happen ahead of time. Itanium depended on compiler researchers to invent literal magic to fulfil its potential, and that never happened.

I forgot about that! I get it now. You'd basically need to JIT the machine code in software to get it as close to optimal as an x86's internal dispatcher, right?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

https://www.phoronix.com/scan.php?page=article&item=new-ryzen-fixed&num=1

It appears that some Ryzen CPUs shipped with a flaw which was causing the segfault issue. However, it did not affect the majority of CPUs, and CPUs after week 30 do not have the defect at all. It's likely AMD started junking these processors once they had volume up enough, but early on they were included to make sure enough chips shipped to retailers. It's also probably why EPYC and TR never displayed such behavior, nor likely would as top bins. My own opinion, not drawn from the article but based on reports from people who were able to get even flawed CPUs to stop segfaulting: the defect seems to be centered on how Ryzen handles the micro-op cache and SMT, and it only happens when fully loaded.

EDIT: I should also point out that AMD is replacing the CPUs for affected users. Why is the CPU division so much more loving competent?

"After week 30" is a cute euphemism, another way to put that would be "after July 29". i.e. once you factor in packaging and distribution time, any CPU that you didn't purchase literally today from a retailer with high turnover is suspect.

edit: and to be clear, this is not just a Linux problem; it also manifests on the Windows 10 Linux Subsystem, and some people have reported intermittent segfaults in native Windows applications as well. It's just that Linux is currently the easiest way to reproduce the problem. Even if you never intend to use Linux you should still run the check just to be sure, and run it for a serious amount of time: some people report it taking 24 hours or more before it finally trips.

I agree that it looks like some kind of cache or execution-unit defect due to manufacturing flaws, although it's gotta be veeeeerrrryyy minor, otherwise the processor would poo poo itself nonstop (just like if you had a bad RAM stick or whatever). It also must be localized, otherwise the flaw would manifest in lots of different ways. Seems like it may just be some "fuzzy transistors" that sometimes don't come out right in a particularly sensitive area of the die?
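
If you want the flavor of what the community checks do, here's a hedged C sketch of the idea (illustrative only; the real scripts people pass around loop full GCC builds and are far more thorough). It forks one compile loop per hardware thread and flags any worker whose compiler dies on SIGSEGV:

code:
/* Sketch of a parallel compile-loop stress test. Assumptions: POSIX
   system, a `cc` on PATH. Run it for hours, per the reports above. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);  /* one worker per thread */

    /* A trivial translation unit for the workers to rebuild forever. */
    FILE *f = fopen("/tmp/stress_tu.c", "w");
    if (!f) return 1;
    fputs("int main(void){volatile long s=0;"
          "for(long i=0;i<1000000;i++)s+=i;return 0;}\n", f);
    fclose(f);

    for (long w = 0; w < ncpu; w++) {
        if (fork() == 0) {
            char cmd[128];
            snprintf(cmd, sizeof cmd,
                     "cc -O2 -o /tmp/stress_bin.%ld /tmp/stress_tu.c", w);
            for (;;) {
                int rc = system(cmd);
                if (rc == -1 ||
                    (WIFSIGNALED(rc) && WTERMSIG(rc) == SIGSEGV)) {
                    fprintf(stderr, "worker %ld: compiler segfaulted\n", w);
                    _exit(1);
                }
            }
        }
    }
    for (;;) pause();  /* run until you interrupt it; watch stderr */
}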

Paul MaudDib fucked around with this message at 19:34 on Aug 25, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

feedmegin posted:

b) The people who came up with this cunning plan were mostly hardware guys. They anticipated compilers getting a lot smarter to make all this poo poo work.

That feel when you toss the hard problems over the fence and make someone else deal with them :feelsgood:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Actually the Phoronix article specifically notes that AMD hasn't commented, and that the assumption it's fixed is based on user reports via forum posts (lol). It may actually be a bit early to label this one "fixed" at all, especially since it takes time for inventory to work through the supply chain.

This bug doesn't affect 100% of Ryzen processors, so it seems possible that Larabel just got "lucky": his first one had the issue and his second one doesn't.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I don't know why I missed that; it seems kind of disingenuous to make such a declarative statement when it could mean that, instead of 33% of processors being affected, it's 10%, but the bug still exists. It's still good of AMD to replace defective CPUs over this issue, and I can kind of understand why they'd rather just strangle this problem quietly in the crib, especially if it's difficult to detect without more expensive and time-consuming validation. This doesn't seem like something they'd be able to "fix": if it's a minor flaw in a few tens of transistors, that's just a manufacturing defect and you have to junk the chip. For most users this doesn't seem to be an issue; for AMD it just means some lost sales.

Rather, I think AMD will continue to let this one slide as long as they can hit a 90/10 split on it: validate for the bug on PRO, TR and EPYC, let some flawed chips still filter down into consumer-grade parts, and quietly replace them for any user who requests an RMA. They really don't lose out this way, as a chip they take back on RMA will just go through validation and repackaging again for some OEM. Basically they can turn it into a non-issue.

GRINDCORE MEGGIDO
Feb 28, 1985


So the "fixed" chips - are they all b2 stepping?

GRINDCORE MEGGIDO fucked around with this message at 00:10 on Aug 26, 2017

PC LOAD LETTER
May 23, 2005
WTF?!

Munkeymon posted:

You'd basically need to JIT the machine code in software to get it as close to optimal as an x86's internal dispatcher, right?

I remember it as being worse than that (not a compiler researcher here, just remembering lots of people's comments at the time about its viability and problems), but essentially yes.

Somehow the compilers were supposed to keep nearly every internal hardware feature and resource running at near-peak usage all the time to get the performance Intel was expecting (IIRC around triple the IPC of x86, though in a roundabout way, since the design was focused on TLP instead). Hence the denigration "magic compilers": it's both a way of pooping on that approach and a literal description of what would've been required.

edit: compilers do keep getting better and have improved since that time, but the pace is relatively glacial compared to what was expected. The magic ones Intel was expecting with Itanium would probably appear right around the time we get true AI worked out. \/\/\/\/\/\/\/

PC LOAD LETTER fucked around with this message at 00:34 on Aug 26, 2017

Mr Shiny Pants
Nov 12, 2012
So the next question: Have compilers gotten better or is this something that was never going to happen?

EoRaptor
Sep 13, 2003

by Fluffdaddy

Mr Shiny Pants posted:

So the next question: Have compilers gotten better or is this something that was never going to happen?

Compilers have gotten better (in fact, a lot better; LLVM was/is a major advance in compiler design). However, it turns out what Itanium needed from a compiler was perfect knowledge of all possible operations an application could perform and all the CPU states that would result, which even with very simple code seems to be an NP-hard problem. Also, compilers are still written by humans and aren't capable of the level of 'perfection' needed to even approach what Itanium demanded.

In the end, itanium wasn't even a good CPU design. It didn't scale well in speed or performance, and was a dead end for most CPU applications, which are dominated by end users.
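
To put a rough shape on "NP-hard", here's a toy C sketch (mine; no relation to any real compiler): brute-forcing the best static schedule for just six fake instructions means walking every topological order of the dependency DAG, and that search space grows factorially with instruction count. Real EPIC scheduling piles bundles, functional units, and unknown memory latencies on top of this.

code:
/* Exhaustive static scheduling of a tiny dependency DAG on a toy
   single-issue in-order machine. Purely illustrative. */
#include <stdio.h>

#define N 6
static const int lat[N] = {3, 1, 3, 1, 2, 1};   /* made-up latencies */
#define E 5
static const int edge[E][2] = {                 /* a -> b: b needs a's result */
    {0, 2}, {1, 2}, {3, 4}, {2, 5}, {4, 5}
};

static int order[N];
static int best = 1 << 30;

static int makespan(void) {
    int finish[N], cycle = 0, worst = 0;
    for (int k = 0; k < N; k++) {
        int i = order[k], start = cycle;
        for (int e = 0; e < E; e++)             /* stall until operands ready */
            if (edge[e][1] == i && finish[edge[e][0]] > start)
                start = finish[edge[e][0]];
        finish[i] = start + lat[i];
        if (finish[i] > worst) worst = finish[i];
        cycle = start + 1;                      /* next issue slot */
    }
    return worst;
}

static void search(int depth, int used) {
    if (depth == N) {
        int m = makespan();
        if (m < best) best = m;
        return;
    }
    for (int i = 0; i < N; i++) {
        if (used & (1 << i)) continue;
        int ready = 1;
        for (int e = 0; e < E; e++)             /* deps must come first */
            if (edge[e][1] == i && !(used & (1 << edge[e][0])))
                ready = 0;
        if (ready) {
            order[depth] = i;
            search(depth + 1, used | (1 << i));
        }
    }
}

int main(void) {
    search(0, 0);
    printf("best schedule: %d cycles\n", best);
    return 0;
}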

NewFatMike
Jun 11, 2015

Alternate architecture chat is making me think about x86 emulation running on ARM that got Intel all in a tizzy earlier this year.

Would be cool to have AMD and Qualcomm competing with Intel. But also rip in peace AMD for selling Adreno.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

NewFatMike posted:

Alternate architecture chat is making me think about x86 emulation running on ARM that got Intel all in a tizzy earlier this year.

Would be cool to have AMD and Qualcomm competing with Intel. But also rip in peace AMD for selling Adreno.

AMD would have stayed afloat on licensing for Adreno alone, and PowerVR would likely never have gotten any contracts with Apple.

sauer kraut
Oct 2, 2004

GRINDCORE MEGGIDO posted:

So the "fixed" chips - are they all b2 stepping?

Nah, from what I read you'd have to scrape off the thermal paste and look at the production date code on your CPU.
If it's from week 25 (30?) 2017 or later, it's supposed to be fixed.

NewFatMike
Jun 11, 2015

FaustianQ posted:

AMD would have stayed afloat on licensing for Adreno alone, and PowerVR would likely never have gotten any contracts with Apple.

Was that another genius Ruiz decision?

SoftNum
Mar 31, 2011

SoftNum posted:

I used Heaven informally and got 60-70 fps at ultra at 1440p. I'm going to run some more "scientific" passes this weekend and post results.

Yeah OK, I spoke too soon. I can't get the CPU config into a state where it performs. The GPU is great (DX10, DX11), but anything that ends up CPU-bound runs like dog doodoo. I don't think it's the NPT bug, because the CPU itself is fine; it seems to be at least partially a memory bandwidth issue. I'm going to try a streamlined config tomorrow, pulling out all the bullshit virt-manager sets up. (Who the gently caress needs an IDE controller?)

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

NewFatMike posted:

Alternate architecture chat is making me think about x86 emulation running on ARM that got Intel all in a tizzy earlier this year.

Would be cool to have AMD and Qualcomm competing with Intel. But also rip in peace AMD for selling Adreno.

I mean, that's been a thing for a long time. Its problem is that it compounds existing ARM performance issues with the speed penalty emulation imposes.

EoRaptor
Sep 13, 2003

by Fluffdaddy

fishmech posted:

NewFatMike posted:

Alternate architecture chat is making me think about x86 emulation running on ARM that got Intel all in a tizzy earlier this year.

Would be cool to have AMD and Qualcomm competing with Intel. But also rip in peace AMD for selling Adreno.

I mean, that's been a thing for a long time. Its problem is that it compounds existing ARM performance issues with the speed penalty emulation imposes.

Transmeta spent a lot of time and money proving x86 emulation isn't worth it, and they weren't stupid or lazy people. Far better to push towards more universal APIs/foundations and work on compiler platform/CPU targeting.

Yaoi Gagarin
Feb 20, 2014

EoRaptor posted:

Compilers have gotten better (in fact, a lot better; LLVM was/is a major advance in compiler design). However, it turns out what Itanium needed from a compiler was perfect knowledge of all possible operations an application could perform and all the CPU states that would result, which even with very simple code seems to be an NP-hard problem. Also, compilers are still written by humans and aren't capable of the level of 'perfection' needed to even approach what Itanium demanded.

In the end, itanium wasn't even a good CPU design. It didn't scale well in speed or performance, and was a dead end for most CPU applications, which are dominated by end users.

I agree with what you're saying about Itanium, but I'm curious why you say LLVM is a major advance in compiler design. To my knowledge its backend isn't as good as GCC or (some) commercial compilers.

Khorne
May 1, 2002

VostokProgram posted:

I agree with what you're saying about Itanium, but I'm curious why you say LLVM is a major advance in compiler design. To my knowledge its backend isn't as good as GCC or (some) commercial compilers.
LLVM has explicit frontend, intermediate, and platform-specific backend parts. While that's how "all" compilers work internally, there's not nearly as distinct a separation in something like GCC. GCC is like v1 of your compiler tech; then you over-engineer a v2 thinking "I can keep the functionality identical but rewrite this so any language could be tacked on front, any backend added seamlessly, and the intermediate code is going to be an open, universal standard usable by anyone who wants to use it!" Except LLVM isn't your over-engineered goon project that gets 0.5 views/mo and will never be reused by you later despite you writing it that way.

If LLVM had a marketing team it'd be called a "next generation" compiler. Not because of what it outputs, which your post is mostly correct about for C/C++ performance, but because of how it is designed. It has a bright future, because developing a "backend" to support a platform means all languages with an LLVM frontend are now available on that platform. Building a language frontend that compiles to LLVM IR means you now support all optimizations and backends present in the LLVM ecosystem. It's a genuine step forward.

There are other cool parts, like lld and BSD license. My post doesn't give a completely accurate view of LLVM either, and people can feel free to correct me or say other cool stuff. But, it hopefully gives some idea about why LLVM is good even if it's not the [your language] compiler you should use right now.
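
To show how thin a frontend can be, here's a hedged sketch against LLVM's C API (real headers and calls, but error handling omitted and the build line may vary by distro). It hand-builds the IR for int add(int a, int b) and prints it; from that module onward, every LLVM optimization and backend is available for free. Build with something like: cc demo.c $(llvm-config --cflags --ldflags --libs core) -o demo

code:
/* Minimal "frontend": construct LLVM IR for add(a, b) and dump it. */
#include <stdio.h>
#include <llvm-c/Core.h>

int main(void) {
    LLVMModuleRef mod = LLVMModuleCreateWithName("demo");
    LLVMTypeRef i32 = LLVMInt32Type();
    LLVMTypeRef params[] = { i32, i32 };
    LLVMTypeRef fty = LLVMFunctionType(i32, params, 2, 0);
    LLVMValueRef fn = LLVMAddFunction(mod, "add", fty);

    LLVMBasicBlockRef entry = LLVMAppendBasicBlock(fn, "entry");
    LLVMBuilderRef b = LLVMCreateBuilder();
    LLVMPositionBuilderAtEnd(b, entry);
    LLVMValueRef sum = LLVMBuildAdd(b, LLVMGetParam(fn, 0),
                                    LLVMGetParam(fn, 1), "sum");
    LLVMBuildRet(b, sum);

    /* The printed IR is what any LLVM backend (x86, ARM, ...) consumes. */
    char *ir = LLVMPrintModuleToString(mod);
    printf("%s", ir);

    LLVMDisposeMessage(ir);
    LLVMDisposeBuilder(b);
    LLVMDisposeModule(mod);
    return 0;
}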

Khorne fucked around with this message at 14:24 on Aug 26, 2017

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
So, strangely, my 3000MHz QVL RAM that I could only get to run at 2800 decided this morning, after a week or so of being happy, to suddenly stop booting and require a drop to 2600. That finally got me having a look at timings and voltages. Bumping the memory voltage from 1.35 to 1.37 has finally got it booting at 3000, and so far it's not giving any errors in Prime95 Blend.

feedmegin
Jul 30, 2008

Khorne posted:

LLVM has explicit frontend, intermediate, and platform-specific backend parts. While that's how "all" compilers work internally, there's not nearly as distinct a separation in something like GCC. GCC is like v1 of your compiler tech; then you over-engineer a v2 thinking "I can keep the functionality identical but rewrite this so any language could be tacked on front, any backend added seamlessly, and the intermediate code is going to be an open, universal standard usable by anyone who wants to use it!" Except LLVM isn't your over-engineered goon project that gets 0.5 views/mo and will never be reused by you later despite you writing it that way.

If LLVM had a marketing team it'd be called a "next generation" compiler. Not because of what it outputs, which your post is mostly correct about for C/C++ performance, but because of how it is designed. It has a bright future, because developing a "backend" to support a platform means all languages with an LLVM frontend are now available on that platform. Building a language frontend that compiles to LLVM IR means you now support all optimizations and backends present in the LLVM ecosystem. It's a genuine step forward.

There are other cool parts, like lld and BSD license. My post doesn't give a completely accurate view of LLVM either, and people can feel free to correct me or say other cool stuff. But, it hopefully gives some idea about why LLVM is good even if it's not the [your language] compiler you should use right now.

All this is true, and nice, and great; LLVM is v good if you want to quickly create a language that more or less targets the C paradigm (in the sense of: native code, statically compiled (it can JIT, but afaik it's not that fast at it), conventional stack-based function calls). Not coincidentally, Apple fund it heavily and use it for Swift, which a goon is actually a compiler engineer on.

None of what you've said addresses optimisation, though, which is what Itanium needed a quantum leap in to be any good. LLVM in theory (if there's an Itanium backend) makes it really easy to make a shiny new compiler/language for Itanium; it doesn't make that language run fast on the Itanium, because while it's no slouch it only has the optimisations that are possible in the real world we mortal humans inhabit.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

HalloKitty posted:

It really didn't. I remember talking to a friend back then and betting ARM would become the next big architecture, even on the desktop, but I never saw the demise of x86; it's just too deeply embedded. It's... kind of come true. At least ARM is far more successful than Itanium.

Explicitly parallel instruction computing design for CPUs just turned out to be a pretty lovely and unworkable solution and would have sucked even without x86 momentum. ARM on the other hand is inherently better but nowhere near enough to overcome said momentum.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

MaxxBot posted:

Explicitly parallel instruction computing design for CPUs just turned out to be a pretty lovely and unworkable solution and would have sucked even without x86 momentum. ARM on the other hand is inherently better but nowhere near enough to overcome said momentum.

What's supposed to be inherently better about ARM that isn't a failed promise like PA-RISC, PowerPC, Alpha, etc?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Scheduling in the delay slots of PA-RISC was annoying, at least in assembly.

feedmegin
Jul 30, 2008

fishmech posted:

What's supposed to be inherently better about ARM that isn't a failed promise like PA-RISC, PowerPC, Alpha, etc?

...than Itanium? Lots of stuff. In the modern world it's much more of a wash between AArch64 and x86-64, though, I would agree. AArch64 is (naturally) a bit cleaner as an instruction set since it doesn't have as much legacy baggage, but at the high end that doesn't matter like it did in the 80s or whatever. On the other hand, a simple RISC instruction set means ARM scales all the way down to tiny microcontrollers like the Cortex-M0, in a way that x86 (probably 32-bit in this case, so also hit by that horrible lack of GPRs) just can't. A super super basic minimal ARM implementation is inherently fewer gates than the equivalent x86.

feedmegin fucked around with this message at 17:18 on Aug 27, 2017

Yaoi Gagarin
Feb 20, 2014

Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

e: besides it having bizarre super weak memory ordering
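
e2: for the curious, a hedged C11 sketch of what that weak ordering means in practice (my own toy, compile with -pthread). On Alpha, even the data-dependent read of m->payload could see a stale value if the pointer were loaded relaxed, which is why the acquire below (or an explicit barrier) was mandatory there, while x86 happens to give you that ordering for free:

code:
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct msg { int payload; };
static struct msg slot;
static _Atomic(struct msg *) published;

static void *producer(void *arg) {
    (void)arg;
    slot.payload = 42;                           /* write the data...    */
    atomic_store_explicit(&published, &slot,
                          memory_order_release); /* ...then publish it   */
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    struct msg *m;
    /* Acquire pairs with the release above. With a relaxed load, Alpha
       was the one mainstream CPU that could hand you the pointer yet a
       stale payload, despite the data dependency. */
    while (!(m = atomic_load_explicit(&published, memory_order_acquire)))
        ;
    printf("payload = %d\n", m->payload);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}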

Volguus
Mar 3, 2009

VostokProgram posted:

Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

e: besides it having bizarre super weak memory ordering

As far as I remember it was a RISC architecture and extremely fast. While I haven't used it personally, the rumor was that WinNT for x86 ran faster in a VM on a DEC than on a native x86 CPU.

BlackMK4
Aug 23, 2006

wat.
Megamarm

Pablo Bluth posted:

So, strangely, my 3000MHz QVL RAM that I could only get to run at 2800 decided this morning, after a week or so of being happy, to suddenly stop booting and require a drop to 2600. That finally got me having a look at timings and voltages. Bumping the memory voltage from 1.35 to 1.37 has finally got it booting at 3000, and so far it's not giving any errors in Prime95 Blend.

I was wondering why mine wouldn't boot at the rated speed with 1.35v....

SwissArmyDruid
Feb 14, 2014

by sebmojo
1900X spotted:

http://fudzilla.com/news/processors/44389-amd-ryzen-threadripper-1900x-spotted-in-india

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

VostokProgram posted:

Was DEC Alpha supposed to be super badass or something back in the day? I find mention of it in a lot of places but no explanation of why it was so interesting

e: besides it having bizarre super weak memory ordering

Ultimately DEC Alpha was important because DEC was important, and they positioned it as the direct successor to their popular VAX families of processors.

Volguus posted:

As far as I remember it was a RISC architecture and extremely fast. While I haven't used it personally, the rumor was that WinNT for x86 ran faster in a VM on a DEC than on a native x86 CPU.

This was true, but part of it was that early NT was specifically designed not to favor any particular CPU design, and part was that x86 processors weren't all that fast themselves back in the day (conversely, DEC Alpha hardware was hardly inexpensive, and IIRC DEC never brought Alphas down to their "low-end" systems in their heyday). And Alpha support was only in NT 3.1/3.5/4.0, and by 4.0 things were already looking sketchy for the architecture.

I guess you could think of it like this: what if Intel's CPU lines stopped at today's "Pentium"-branded chips, everything i3 and up and all the Xeons were absent, and they were also a few generations behind current? That's kinda what putting 486s or early Pentiums up against DEC Alphas was like, with the Alphas in the role of today's very high-end Intel chips.

fishmech fucked around with this message at 16:15 on Aug 29, 2017

eames
May 9, 2009

The Destiny 2 beta is looking good on a Ryzen 1700 with all its threads; I wonder what Threadripper is like. :banjo:

https://www.youtube.com/watch?v=p4mKq--nQxI&t=375s
