orange juche
Mar 14, 2012





mayodreams posted:

Hell, I've been running at 4.3 GHz on an i5-2500K with a Hyper 212 for almost 3 years with zero issues. This machine has been, bar none, the best I've ever built.

Apparently you can hit almost 4.7 GHz with voltage tweaking before it becomes unstable/too hot. AnandTech was running IntelBurnTest and they hit 4.7 GHz at 73°C on air, and it held stable.


HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

orange juche posted:

Apparently you can hit almost 4.7 GHz with voltage tweaking before it becomes unstable/too hot. AnandTech was running IntelBurnTest and they hit 4.7 GHz at 73°C on air, and it held stable.

Oh yeah, Sandy Bridge takes it like a champ. But sometimes you stop short for noise and power reasons when you don't need to push it. I've been running my 2500K @ 4.4 for ages, happily.

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

orange juche posted:

Apparently you can hit almost 4.7 GHz with voltage tweaking before it becomes unstable/too hot. AnandTech was running IntelBurnTest and they hit 4.7 GHz at 73°C on air, and it held stable.
Really depends on the chip. My 2600K tops out at 4.6 GHz with sane voltages. Cooling isn't a problem, since the HR-02 keeps things around 66°C max for everything except IBT, but any higher clocks need 1.4V or more to stay stable.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
It's a recurring theme that when people ITT want to look at something positive, we start talking about Intel.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

keyvin posted:

It's a recurring theme that when people ITT want to look at something positive, we start talking about Intel.

Poor AMD, they can't catch a break.

On a positive note, I'll always fondly remember them, like 3Dfx.

My AM386-SX40 was a cheap and fast 386, my K6-200 was decent, and my K6-2 450 overclocked well and gave AMD the reputation of being a company that doesn't make you change motherboards (Socket 7/Super 7).

Not that it matters now, sadly.

ATi is a different story; it's pointless and ridiculous to lump the two together historically.

SwissArmyDruid
Feb 14, 2014

by sebmojo

keyvin posted:

It's a recurring theme that when people ITT want to look at something positive, we start talking about Intel.

And that's why the Pentium AE was such a genius masterstroke. It singlehandedly cut the legs out from under AMD in the only remaining price/performance category they had going for them.

Pimpmust
Oct 1, 2008

We can only hope Sieg and Son reaches out to AMD in the near future, I think they can leverage some great synergies between the two and take the whole business to the next level :thejoke:

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva
I mean, sure, the Intel/WY synthetics are better, but if you're on a budget the AMD-powered androids are still a great choice if you can handle the extra power usage and occasional radiation leak

GokieKS
Dec 15, 2012

Mostly Harmless.
The Pentium AE really is pretty ridiculous. Since the pump on my Glacer 240L started to develop an obnoxious buzzing sound, I've temporarily put the stock HSF back on, and it's been fine at the same OC (4.6 @ 1.275V). I really don't play much of anything other than D3, but it hasn't gone above 80°C.

Pimpmust
Oct 1, 2008

cisco privilege posted:

I mean, sure, the Intel/WY synthetics are better, but if you're on a budget the AMD-powered androids are still a great choice if you can handle the extra power usage and occasional radiation leak

I guess we know why they called it the APOLLO system :downsrim:

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

keyvin posted:

It's a recurring theme that when people ITT want to look at something positive, we start talking about Intel.

Oh, I dunno. I still love the 1090T machine I handed down to my parents. It even competes favorably against AMD's 2014 parts.

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

Civil posted:

It even competes favorably against AMD's 2014 parts.
That's not actually a good thing, for either machine.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Civil posted:

Oh, I dunno. I still love the 1090T machine I handed down to my parents. It even competes favorably against AMD's 2014 parts.

That's not a good thing, and it's also the reason why I'm still on a 965 BE and 880FX.

SYSV Fanfic
Sep 9, 2003

by Pragmatica

SwissArmyDruid posted:

That's not a good thing, and it's also the reason why I'm still on a 965 BE and 880FX.

You sure it's not because you're a techno-masochist?


GokieKS posted:

The Pentium AE really is pretty ridiculous. Since the pump on my Glacer 240L started to develop an obnoxious buzzing sound, I've temporarily put the stock HSF back on, and it's been fine at the same OC (4.6 @ 1.275V). I really don't play much of anything other than D3, but it hasn't gone above 80°C.

It didn't run D3 at stock?

Ragingsheep
Nov 7, 2009

keyvin posted:

You sure it's not because you're a techno-masochist?

I still use a 965BE because, so far, I haven't come across any use case that requires an upgrade.

GokieKS
Dec 15, 2012

Mostly Harmless.

keyvin posted:

It didn't run D3 at stock?

Not sure how you're drawing that conclusion from what I said. I bought the Micro Center G3258 + MSI Z97 combo specifically to OC it and tide me over until the Haswell-E launch (still waiting for an Asus ROG X99 GENE motherboard). It did 4.6 GHz easily with my Glacer 240L, and it's managed to keep that nearly 45% OC even with the stock HSF (though with better thermal compound).

Yaoi Gagarin
Feb 20, 2014

Ragingsheep posted:

I still use a 965BE because, so far, I haven't come across any use case that requires an upgrade.

I run a 955BE right now, for the same reason. I know that I would see a speedup in certain games, but it's not worth the money to me right now. And thankfully I've only had to compile the Linux kernel twice so far in my OS class...

SwissArmyDruid
Feb 14, 2014

by sebmojo

keyvin posted:

You sure it's not because you're a techno-masochist?

My workload and the games I am currently playing have not yet demanded that I upgrade. I do, however, have a new machine budgeted for if and when Star Citizen comes out.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Well, this could either be absolutely brilliant, or complete folly for AMD. It's hard to tell at this point.

http://techreport.com/news/27259/cpu-startup-claims-to-achieve-3x-ipc-gains-with-visc-architecture

It should be noted that AMD is a major investor in this venture, as is Mubadala (the company that owns GloFo, which just bought IBM's chip unit; read: a RISC business they have leverage over and could shift toward VISC).

If AMD can take this VISC architecture and integrate it with the already-existing HSA work they've done, then yeah, they will completely obviate the need for things like OpenCL libraries: the virtualized core's combined VISC/HSA middleware would ideally be composed of one or more CPU cores mixed with one or more GPU cores, break whatever work is appropriate out to the GPU, and present a single nondescript virtual core to applications for ease of programming.

This could also obviate the need to make applications more multithreaded.
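To make that programming model concrete, here's a sketch in C of what the application side would look like, assuming the middleware behaves as described (the transparent offload is the hypothetical part; this is not any real VISC/HSA API):

#include <stddef.h>

/* Plain single-threaded SAXPY. Under the claimed model the application
 * only ever sees one virtual core; the VISC/HSA middleware would
 * (hypothetically) spot the data-parallel loop and farm it out to GPU
 * cores on its own. Today the same offload takes explicit OpenCL
 * buffers, kernels, and command queues. */
void saxpy(float *y, const float *x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}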

Mad props for AMD if this was the end game all along. I'm excited.

SwissArmyDruid fucked around with this message at 22:25 on Nov 4, 2014

No Gravitas
Jun 12, 2013

by FactsAreUseless

SwissArmyDruid posted:

Well, this could either be absolutely brilliant, or complete folly for AMD. It's hard to tell at this point.

http://techreport.com/news/27259/cpu-startup-claims-to-achieve-3x-ipc-gains-with-visc-architecture

It should be noted that AMD is a major investor in this venture.

If AMD can take this VISC architecture and integrate it with the already-existing HSA work they've done, then yeah, they will completely obviate the need for things like OpenCL libraries: the virtualized core's combined VISC/HSA middleware would ideally be composed of one or more CPU cores mixed with one or more GPU cores, break whatever work is appropriate out to the GPU, and present a single nondescript virtual core to applications for ease of programming.

Mad props for AMD if this was the end game all along. I'm excited.

Sounds a lot like hyper-threading at CPU scale to me. I can imagine this making more threads runnable. I cannot see how this would give you better single-threaded performance, especially not with as large a speedup as is claimed. And single-threaded performance is where the battle for the desktop is being won.

EDIT: Maybe it is like dynamic vectorization/recompilation and GPU offloading? Hello, Transmeta.

Rastor
Jun 2, 2001

No Gravitas posted:

Sounds a lot like hyper-threading at CPU scale to me.

EDIT: Maybe it is like dynamic vectorization/recompilation and GPU offloading? Hello, Transmeta.

Here is the Tom's Hardware writeup, FWIW.

No Gravitas
Jun 12, 2013

by FactsAreUseless
Yeah. Take something single-threaded and make it run in parallel where possible.

Neat idea, even though I remain a skeptic.

I do note they measure the speedup in instructions per core per cycle. What is the clock speed then?

EDIT: Look at their pipeline.

11 stages. Out of those, the execute phase takes one stage, unless it is a long-latency or memory operation, in which case you get two stages. That's either a lot of work done per stage or a very simple ISA where you need a lot of instructions to do anything.

For this to run x86, it will either have to run at very low clock speeds or have an enormously long pipeline when running fast.
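The standard pipelining arithmetic behind that tradeoff (textbook material, not from their slides): splitting a total logic delay T across N stages with per-stage latch overhead t_l gives

\[
  f_{\max} = \frac{1}{T/N + t_l}
\]

With N = 11 and a single execute stage, either each stage swallows a lot of logic (T/N stays large, so f_max stays low) or the ops are simple enough to fit, and raising N to recover clock speed is exactly the enormously-long-pipeline case.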

EDIT2: Unless they manage to do their data accesses before they hit the execute stage, during dispatch or something. Might be possible, I guess.

No Gravitas fucked around with this message at 22:57 on Nov 4, 2014

Longinus00
Dec 29, 2005
Ur-Quan

No Gravitas posted:

Sounds a lot like hyper-threading at CPU scale to me. I can imagine this making more threads runnable. I cannot see how this would give you better single-threaded performance, especially not with as large a speedup as is claimed. And single-threaded performance is where the battle for the desktop is being won.

EDIT: Maybe it is like dynamic vectorization/recompilation and GPU offloading? Hello, Transmeta.

The term you're looking for is speculative multithreading. It's been around in academia for a while now. It's basically the next step in CPU speculative execution, from the instruction level to the thread level.

Here's a fun blast-from-the-past overview article about speculative multithreading from 2001. A few fun bits are the mini excerpts from Compaq and Cray.
http://www.ece.umd.edu/~manoj/759M/SpMT.pdf
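If you want the flavor of it in code, here's a toy in C with pthreads (my own illustration, nothing to do with Soft Machines' implementation): run loop iterations in parallel against a snapshot, speculating that no cross-iteration dependence bites, then validate and squash on mis-speculation. Real SpMT hardware does this with versioned caches instead of a wholesale re-run, and the loop body here is contrived to force a violation.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define N 8
static int in[N], spec[N];

/* Contrived loop body with a cross-iteration dependence: each result
 * wants the UPDATED previous element. */
static int step(int prev, int cur) { return cur + prev; }

static void *run_iter(void *arg) {
    long i = (long)arg;
    /* Speculation: read the OLD value of in[i-1], betting that
     * iteration i-1's write doesn't matter. */
    spec[i] = step(i ? in[i - 1] : 0, in[i]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int seq[N];

    for (long i = 0; i < N; i++) in[i] = (int)i;

    /* Fire off all iterations at once, one thread per iteration. */
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, run_iter, (void *)i);
    for (long i = 0; i < N; i++)
        pthread_join(t[i], NULL);

    /* Validate against true sequential execution, which feeds step()
     * the updated previous value. A mismatch means the speculation
     * violated a dependence and must be squashed. */
    for (long i = 0; i < N; i++)
        seq[i] = step(i ? seq[i - 1] : 0, in[i]);

    if (memcmp(spec, seq, sizeof seq) != 0)
        puts("mis-speculation: squash, keep the sequential result");
    else
        puts("speculation held: commit the parallel result");
    return 0;
}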

Longinus00 fucked around with this message at 16:17 on Nov 5, 2014

BigPaddy
Jun 30, 2008

That night we performed the rite and opened the gate.
Halfway through, I went to fix us both a coke float.
By the time I got back, he'd gone insane.
Plus, he'd left the gate open and there was evil everywhere.


Today's poo poo that pisses me off is not work-related: banks releasing those apps that let you take pictures of cheques to deposit them. My bank has one, but surprise surprise, it doesn't work, so I still need to mail in my refund from Comcast like it's the 90s. Which leads on to this: if Comcast can take money from my account, why can't they put it back in instead of sending me a cheque? Of course I know the answer: they hope people lose the cheque or can't be arsed to cash it.

JawnV6
Jul 4, 2004

So hot ...
Yeah, there's a lot of guarded language around it. What stuck out to me was "licensing and co-developing," so they want to be what ARM is now, without a proven product. As an undergrad I helped with a research project that did speculation at the loop level, splitting each iteration out to a different core with enough messages between them to keep track of shared memory. There's plenty of research on similar things, but I'm skeptical without anything in a consumer's hand running realistic workloads.

Mobile's a great place for this to develop just because there's an expectation of a recompile before moving something over. It might not be free, but there are no legacy binaries kicking around to justify anything.

$125M doesn't strike me as amazingly well-funded either. A team of 500 for 2 years, or 200 for 5, assuming they'll eventually want to tape something out? Maybe you could pare it down to 50 architects and get huge discounts on masks and such, to get the luxury of a B0?
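The implicit arithmetic, spelled out (my back-of-envelope, not from the post):

\[
  500 \times 2 = 200 \times 5 = 1000 \text{ engineer-years}, \qquad
  \frac{\$125\text{M}}{1000} = \$125\text{k per engineer-year}
\]

which is under a fully loaded headcount cost before masks or tools even enter the picture.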

No Gravitas posted:

EDIT2: Unless they manage to do their data accesses before they hit the execute stage, during dispatch or something. Might be possible, I guess.
Can't always know what memory's necessary before the prior instruction's out of execute :v:

No Gravitas
Jun 12, 2013

by FactsAreUseless

Longinus00 posted:

The term you're looking for is speculative multithreading.

Funny, I should have heard of it by now. I also adore the fact that the article has about five times as much space dedicated to citations as to content.

JawnV6 posted:

Can't always know what memory's necessary before the prior instruction's out of execute :v:

I know, I was just trying to come up with some way to make it work. I guess I'd really like this to succeed, but I just don't see how it could work in the end.

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!

No Gravitas posted:

I do note they measure the speedup in instructions per core per cycle. What is the clock speed then?
350 MHz or so. This is likely one of the major reasons they're able to hit their IPC claims. It's easy to ramp up IPC when your DRAM is a dozen cycles away. It's also easier to run a very short pipeline when you don't need to hit high clock rates. I've discussed this before, but it's always fun to see the same problems come up again.

This is also why IPC measurements by themselves are meaningless, and why Linley's article is a bad PR puff piece. Figure 4 is especially ridiculous in light of their design running at such low frequencies. He gives a bit of a handwave to the problems I mentioned here, but then goes on to just parrot their press release (of course, the fact that the press release happened at Linley's own conference might have something to do with the light touch he gave them).

The major thing they're selling, the "virtual cores," appears at this stage to be nothing more than a clustered OoO back-end. A prominent example of this type of design shipped over 15 years ago (seriously, compare slide 4 of this talk to Figure 2 in the MPR writeup). Every wide OoO design you can find these days implements a clustered OoO engine because it lets you run faster and at lower power: fewer ports in the register files, simpler reservation stations, simpler dispatch logic, etc. They might also be sharing the back-end between multiple threads; they're not exactly the first to implement SMT.

Their current design (which, again, runs at ~350 MHz) has 2 clusters. The big lie-by-omission here is the implication that this design will easily scale up (e.g., see slide 11 of their Linley PC talk). They expect not only that their frequency will improve, but that they can easily and continually scale the number of back-end pipeline stages that need to talk to one another. I'll believe it when I see it, as their claim that "this is mostly logistics" is naive or disingenuous.

This is all very likely a desperate media blitz to pick up investors. The EE Times writeup says as much pretty plainly.

quote:

[Lingareddy, CEO of Soft Machines] aims to close a "huge" Series C funding round to finance the startup for the next three years.

They've carefully put together marketing materials, cherry-picked benchmarks, shown enticing numbers with no underlying data or technical explanation, and hit all the right news sources in order to advertise their company without showing that they're behind on their designs (because building hardware is hard). As Jawn mentioned, they're talking about licensing. This is probably because that's the only way they can make money without burning through another 7 years trying to make a chip that works. Show off a couple of prototypes and hopefully get licensed or bought out.

Now, their actual novel invention may be that the front-end of their pipeline does dynamic binary translation (akin to Transmeta or NV's Denver, which employed a boatload of Transmeta people). They may be able to use that to reduce the average amount of interconnect you need between your clusters: if you can dynamically find medium-sized chunks of independent code, then you may be able to get away with minimal interconnect across your clusters and make it easier to scale up. The fact that they're not showing any real data about this implies, to me, that they're finding it very difficult to accomplish, either in hardware/firmware or for general-purpose code. This might be why they see good relative performance on libquantum, for example: that benchmark is easy to auto-parallelize. If their hardware front-end is doing this (and maybe vectorization as well) while the comparison processors only get to use one core (even though a simple GCC pass could create the same type of parallel code and vectorize the data-parallel portions), then they'll show better performance while still not really showing any data.
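To show what "easy to auto-parallelize" means here, a stand-in in C for the shape of libquantum's hot loop (hypothetical, not the actual SPEC source): every iteration is independent, so an ordinary compiler can vectorize or thread it with no exotic hardware at all.

#include <stddef.h>
#include <stdint.h>

/* Flip one qubit's bit across every basis-state index in the register.
 * No cross-iteration dependence: trivially data-parallel. */
void toggle_bit(uint64_t *state, size_t n, uint64_t mask) {
    for (size_t i = 0; i < n; i++)
        state[i] ^= mask;
}

/* gcc -O3 auto-vectorizes this (-ftree-vectorize is on at -O3), and
 * gcc -O3 -ftree-parallelize-loops=4 will also split it across threads. */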

Overall, they may have a good idea in there somewhere. But the data they're showing now reeks of desperation. They're coming out of stealth mode because the money ran out before they could get their hardware to perform. The numbers they're showing are pure fluff, and they're being very cagey about what, exactly, their hardware is supposed to be doing. If it's what I've claimed here (binary translation doing auto-parallelization), then I'm skeptical that it will ever work in the general case. If their big result is actually the numbers they're currently showing, then this is all a bunch of hogwash.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Thanks, Menacer! I can always count on you to cut through the fluff and crap and zoom in on what's really important!

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!

JawnV6 posted:

I want a smart agent that can arbitrarily modify any cache line that passes it with a 1 cycle penalty. I want two implementations of the same ISA that can transfer uarch state through a sideband. I want to expose that knob to my compiler instead of hiding things behind a DVFS table hack with thousands of cycles to shuffle things over. I want my DDR controller to support atomic operations so that my cards can set flags without 8 round trips.
I know this is from like a month ago, but sorry for not responding to this part earlier. With respect to fast switching between multiple core implementations of the same ISA, you might be interested in this paper. They show major energy savings by quickly hopping between two heterogeneous back-ends, rather than requiring a world-switch between the cores. Check the Google Scholar citations if you want to see where other academics are taking the idea.

As for your memory controller doing your atomic operations for you, I believe that Micron's Hybrid Memory Cube can do this (see Section 9.10.3). The memory controller sits at the bottom of a 3D stack (cores talk to it over a packetized connection, rather than doing all of the DRAM timing themselves). You can have that memory controller do some interesting things with a single command.
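A rough C11 sketch of what that buys, with ordinary atomics standing in for the idea (conceptual only; real HMC atomic commands have their own packet format): the flag-set Jawn describes is one read-modify-write a smart controller could finish in place, versus a CAS loop that bounces the line back and forth.

#include <stdatomic.h>

/* Round-trip-heavy version: load, then retry compare-and-swap until it
 * sticks; every retry drags the cache line across the interconnect. */
void set_flag_cas(_Atomic unsigned *flags, unsigned bit) {
    unsigned old = atomic_load(flags);
    while (!atomic_compare_exchange_weak(flags, &old, old | bit))
        ;  /* 'old' is refreshed on failure; loop until the OR lands */
}

/* Single-command version: one fetch_or that a memory-side controller
 * could (hypothetically) execute right next to the DRAM. */
void set_flag_near_memory(_Atomic unsigned *flags, unsigned bit) {
    atomic_fetch_or(flags, bit);
}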

Wistful of Dollars
Aug 25, 2009

When the day comes to replace my 3570K, AMD will have something worth buying, right guys?

Right?

:smith:

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Well, if the Zen uarch is worth anything, then maybe, actually! If.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Factory Factory posted:

Well, if the Zen uarch is worth anything, then maybe, actually! If.

If... Intel stops all development in the meantime.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

El Scotch posted:

When the day comes to replace my 3570K, AMD will have something worth buying, right guys?

Right?

:smith:

SwissArmyDruid
Feb 14, 2014

by sebmojo
So, imagine my surprise when I heard that it was the HBM project AMD was working on with Hynix that actually panned out, as opposed to Nvidia's HMC.

Now, let's face it: it's new technology, so it's not going to be cheap. Nvidia may have announced that they're also going to use AMD/Hynix's HBM, but they're going to be a year behind in getting those products out the door, and like all new technologies, the initial rollout is going to be expensive.

But, 12 months down the line, when Nvidia starts migrating to stacked memory, we could see AMD APUs with a single stack of HBM on the package as dedicated graphics memory for low-end, non-enthusiast devices. I'm even thinking that eventually this makes its way into consoles via AMD's semi-custom silicon business, since they just won another contract with Nintendo to provide the hardware for their entire ecosystem.

Discuss.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

SwissArmyDruid posted:

AMD's semi-custom silicon business, since they just won another contract with Nintendo to provide the hardware for their entire ecosystem.

Oh, nice, didn't hear about this.

It would be amusing if Nintendo, with its forgotten and little-loved Wii U, suddenly had the most powerful of the 3 machines. Maybe they could include a controller designed for human hands with a good battery life!

Bloody Antlers
Mar 27, 2010

by Jeffrey of YOSPOS
One of my favorite things to daydream about is imagining what the best AMD engineers could come up with if they had the same funding and process nodes as Intel.

Or reverse that situation: take the number of Intel engineers you could pay within AMD's salary limitations, and give them GF or TSMC (when it was struggling with 32nm) to design for with a similar R&D budget.

Given how incredibly mismanaged AMD was under the former CEO, and how comparatively little cash they've had for R&D, you just have to KNOW there are some badass engineers on board for AMD to have briefly delivered a superior platform vs. Intel and then stayed alive this long after all the anti-competitive practices Intel used against them.

It reminds me of the space race in a way.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Bloody Antlers posted:

One of my favorite things to daydream about is imagining what the best AMD engineers could come up with if they had the same funding and process nodes as Intel.

Or reverse that situation: take the number of Intel engineers you could pay within AMD's salary limitations, and give them GF or TSMC (when it was struggling with 32nm) to design for with a similar R&D budget.

Given how incredibly mismanaged AMD was under the former CEO, and how comparatively little cash they've had for R&D, you just have to KNOW there are some badass engineers on board for AMD to have briefly delivered a superior platform vs. Intel and then stayed alive this long after all the anti-competitive practices Intel used against them.

It reminds me of the space race in a way.

Everybody loves an underdog.

thebigcow
Jan 3, 2001

Bully!
They hired a bunch of guys from DEC who worked on the Alpha, right around the time Compaq bought what was left of DEC. Those were the people who made the Athlon.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

El Scotch posted:

When the day comes to replace my 3570K, AMD will have something worth buying, right guys?

Right?

:smith:

The Kabini can be worth buying in certain circumstances. If you want a low-power processor that'll do AES-NI, you don't have a ton of options. The Athlon 5350 is an OK laptop-level processor.

High-power stuff (>50W)? Nah, buy a Pentium AE or an i5/i7.


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

thebigcow posted:

They hired a bunch of guys from DEC who worked on the Alpha, right around the time Compaq bought what was left of DEC. Those were the people who made the Athlon.


That's what My Father From DEC (a survivor who works for HP now) has always claimed: "The Athlon XP/64 was just the practical commercialization of the Alpha architecture." Good to hear it's not just DadTales.

Paul MaudDib fucked around with this message at 09:36 on Dec 25, 2014
