penus penus penus
Nov 9, 2014

by piss__donald

Zero VGS posted:

I'm pretty sure it's all "high margin"... they're all like $30 to make or something, most of the cost being the VRAM I think.

Is there a source for this? Usually when something is high margin it becomes evident when a company comes under pressure and can really slash prices, which I would have imagined AMD doing heavily this last year if they could. Although I'd imagine the real cost of a GPU would be fairly transparent anyway, considering everything is sourced from third parties.

edit: On second thought, I'm sure the real cost is R&D and the actual machinery versus the literal wafers and PCBs themselves, so perhaps they really do cost like $30 to make lol (literally speaking)

penus penus penus fucked around with this message at 22:44 on Dec 18, 2015


SwissArmyDruid
Feb 14, 2014

by sebmojo
Used to be that you could look up DRAM spot pricing on google.

Seems like Micron, at least, is selling DRAM on average $.67 per gigabit.

Maybe we'll see a jump when they start flogging GDDR5X to the OEMs, though.
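
As a rough sanity check on that figure (a back-of-the-envelope sketch only, assuming the ~$0.67/Gb average applies to graphics memory and ignoring any GDDR5 premium or packaging costs):

```python
# Back-of-the-envelope VRAM cost from a DRAM average selling price.
# Assumes the ~$0.67 per gigabit figure quoted above; real GDDR5
# contract pricing will differ.
price_per_gigabit = 0.67

for capacity_gb in (2, 4, 8):                      # card VRAM sizes in gigabytes
    gigabits = capacity_gb * 8                     # 8 gigabits per gigabyte
    print(f"{capacity_gb} GB -> ~${gigabits * price_per_gigabit:.2f}")

# 2 GB -> ~$10.72, 4 GB -> ~$21.44, 8 GB -> ~$42.88
```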

penus penus penus
Nov 9, 2014

by piss__donald

SwissArmyDruid posted:

Used to be that you could look up DRAM spot pricing on google.

Seems like Micron, at least, is selling DRAM on average $.67 per gigabit.

Maybe we'll see a jump when they start flogging GDDR5X to the OEMs, though.

On that note, I've heard GDDR5 is very expensive relative to normal RAM, but I can't remember on what scale.

edit: I guess this is something

penus penus penus fucked around with this message at 23:05 on Dec 18, 2015

SlayVus
Jul 10, 2009
Grimey Drawer

THE DOG HOUSE posted:

On that note, I've heard GDDR5 is very expensive relative to normal RAM, but I can't remember on what scale.

edit: I guess this is something



The pricing on the GPU is inaccurate. All GF110 parts are the same GPU with disabled units.

The GTX 560 Ti 448 Core, GTX 570, GTX 580, and GTX 590 all use a GF110 processor. The 560 Ti 448 has two SM units disabled, whereas the 570 has one SM unit disabled. The 590 uses two lower-clocked, full-size processors. That means the GPU manufacturing cost should be the same for all of those cards.

FSMC
Apr 27, 2003
I love to live this lie

SlayVus posted:

The pricing on the GPU is inaccurate. All GF110 parts are the same GPU with disabled units.

The GTX 560 Ti 448 Core, GTX 570, GTX 580, and GTX 590 all use a GF110 processor. The 560 Ti 448 has two SM units disabled, whereas the 570 has one SM unit disabled. The 590 uses two lower-clocked, full-size processors. That means the GPU manufacturing cost should be the same for all of those cards.

Nvidia would disable the SM units and sell the chips to the card manufacturers for different prices.

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

THE DOG HOUSE posted:

edit: On second thought, im sure the real cost is R&D and the actual machinery versus the literal wafers and pcbs themselves, so perhaps they really do cost like $30 to make lol (literally speaking)

And of course most of the cost of the wafers is the R&D of the process node. It's R&D all the way down.

SlayVus posted:

The pricing on the GPU is inaccurate. All GF110 parts are the same GPU with disabled units.

The GTX 560 Ti 448 Core, GTX 570, GTX 580, and GTX 590 all use a GF110 processor. The 560 Ti 448 has two SM units disabled, whereas the 570 has one SM unit disabled. The 590 uses two lower-clocked, full-size processors. That means the GPU manufacturing cost should be the same for all of those cards.

Maybe they're accounting for parametric yields. I'm a little surprised by what that would imply for them, though.

^^ Ah, right of course, it's at the OEM level so that's abstracted away by the magic of market segmentation.

SwissArmyDruid
Feb 14, 2014

by sebmojo

THE DOG HOUSE posted:

On that note, I've heard GDDR5 is very expensive relative to normal RAM, but I can't remember on what scale.

edit: I guess this is something



http://www.trefis.com/stock/mu/model/trefis?easyAccessToken=PROVIDER_272e0f736dff46940dc10274fd1458aff0e454eb&from=widget:forecast

That's the best I've got ATM. Click on the tab that says "DRAM prices continue to fall" and over on the right side, click on "Micron's Core DRAM Average Selling Price per Gb".

Police Automaton
Mar 17, 2009
"You are standing in a thread. Someone has made an insightful post."
LOOK AT insightful post
"It's a pretty good post."
HATE post
"I don't understand"
SHIT ON post
"You shit on the post. Why."

SwissArmyDruid posted:

I've heard it's possible, I've not yet done it myself. 2016 is the year I build a new box (Skull Canyon if it's got iGPU, Skylake if not, Arctic Islands either way) from scratch and move over fully to Linux and use hardware passthrough to VM Windows into its own little box where it can run my one or two remaining Windows-only apps and can't hurt us. And games. :rolleyes:

The documentation for my chosen QEMU-based solution says that the driver looks for KVM extensions and then self-terminates if it detects them. QEMU has flags you can use to hide those extensions from the driver, although it seems the driver also looks for Hyper-V as well.

QEMU works around it, but apparently this costs real performance under Windows and may subject you to CLOCK_WATCHDOG_TIMEOUT bluescreens.

Quote from Nvidia:

"We fixed some hypervisor detection code that was breaking Hyper-V. It's possible that fix may be preventing GeForce cards from working in passthrough, but because it is not officially supported for GeForce cards, this will not be fixed."

https://forums.geforce.com/default/...232923/#4232923
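
For reference, the workaround being described usually comes down to a couple of -cpu flags. Below is a rough, hypothetical sketch of those flags written out as a Python argument list so they can be annotated; the PCI address, memory size, and vendor string are placeholders, not a tested passthrough config:

```python
# Hypothetical sketch of the QEMU flags discussed above; placeholders only,
# not a complete or guaranteed-working GPU passthrough configuration.
qemu_args = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "8G",                                    # guest RAM (placeholder)
    # kvm=off hides the KVM CPUID signature from the guest, and
    # hv_vendor_id changes the Hyper-V vendor string the driver checks for.
    "-cpu", "host,kvm=off,hv_vendor_id=whatever",
    # VFIO passthrough of the GeForce card (placeholder PCI address).
    "-device", "vfio-pci,host=01:00.0",
]

print(" ".join(qemu_args))   # just print the command line rather than launching it
```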

I'm planning a build like that right now. I got very interested in the i7-5820K with X99 vs. Skylake because of the six cores; the downside is that it doesn't have any integrated graphics whatsoever, so I would need a second, preferably low-powered and passively cooled card that can do the general stuff in Linux (video acceleration, desktop use) that you don't need a $300 graphics card for. Anybody have any good advice on such a low-cost card which would be able to drive a 2560x1440 / 1280x1024 screen setup, even with a compositing window manager, without starting to chug? I'm just weighing my options right now; I might go with Skylake anyway to save on power, though it wouldn't exactly be an energy-efficient build either way, methinks.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Police Automaton posted:

I'm planning a build like that right now. I got very interested in the i7-5820K with X99 vs. Skylake because of the six cores; the downside is that it doesn't have any integrated graphics whatsoever, so I would need a second, preferably low-powered and passively cooled card that can do the general stuff in Linux (video acceleration, desktop use) that you don't need a $300 graphics card for. Anybody have any good advice on such a low-cost card which would be able to drive a 2560x1440 / 1280x1024 screen setup, even with a compositing window manager, without starting to chug? I'm just weighing my options right now; I might go with Skylake anyway to save on power, though it wouldn't exactly be an energy-efficient build either way, methinks.

nVidia option:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814121980

AMD option:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814131678

The AMD option has a 4GB buffer. PowerColor evidently has lovely support, though.

SlayVus
Jul 10, 2009
Grimey Drawer
You can get a 5450 new from Newegg for $30. DirectX 11 compliant. Should be fast enough for any regular desktop work.

Edit: I may not know what the actual requirements would be for what you want, but I am finding options that I think would work for cheaper than those /\/\.

Found an HD 6950 for $80 on Newegg. 2GB of VRAM should be sufficient. You might get better options used, though, from [H] and other places.

SlayVus fucked around with this message at 01:18 on Dec 19, 2015

Police Automaton
Mar 17, 2009
"You are standing in a thread. Someone has made an insightful post."
LOOK AT insightful post
"It's a pretty good post."
HATE post
"I don't understand"
SHIT ON post
"You shit on the post. Why."

SlayVus posted:

You can get a 5450 new from Newegg for $30. DirectX 11 compliant. Should be fast enough for any regular desktop work.

Edit: I may not know what the actual requirements would be for what you want, but I am finding options that I think would work for cheaper than those /\/\.

Found an HD 6950 for $80 on Newegg. 2GB of VRAM should be sufficient. You might get better options used, though, from [H] and other places.

Price advice doesn't really work because I'm over in Germany, but the HD 5450 seems like exactly what I was looking for, in the ideal price bracket. When the Windows VM isn't running it would still be possible to use the other card, the R9 390, for Linux too, without a reboot. Thanks!

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS
I was gonna repaste a GPU and CPU today and my paste was all dried out. Someone posted the best non-metallic one here a while back. What was it? Browsing various sites seems to indicate that it's "Thermal Grizzly Kryonaut"

Fauxtool fucked around with this message at 09:22 on Dec 19, 2015

LiquidRain
May 21, 2007

Watch the madness!

Last I checked a few months ago it was Gelid GC Extreme.

Peteyfoot
Nov 24, 2007
What's on the horizon for Intel's integrated graphics? I have a Broadwell laptop that I'm very happy with right now for basic gaming (mostly 2D) and am wondering what the next big jump will be.

Truga
May 4, 2014
Lipstick Apathy

LiquidRain posted:

Last I checked a few months ago it was Gelid GC Extreme.

It was this, yes.

Thermal Grizzly Kryonaut is some new thing and it's quite a bit better than the Gelid stuff. It's new though, so there's no data yet on how it performs in the long run. I like my Gelid GC Extreme in large part because it doesn't need to be reapplied for a long, long time if you're not doing something silly. Plus, it's half the price of that Grizzly one.

Subjunctive posted:

Spoken like someone who has never had a printer.

Triggered.

SwissArmyDruid
Feb 14, 2014

by sebmojo

terre packet posted:

What's on the horizon for Intel's integrated graphics? I have a Broadwell laptop that I'm very happy with right now for basic gaming (mostly 2D) and am wondering what the next big jump will be.

Intel has embraced Adaptive-Sync. They've said that it will be making its way into their iGPUs in the future. That, combined with AMD's announcement of future support for FreeSync over HDMI and their effort to get it rolled into a VESA standard, means that Nvidia has basically lost the variable refresh war. G-Sync, in its current implementation, will never be a mainstream product, only an enthusiast one.

Putting eDRAM onto chips is still expensive per gigabit. Sure, it can double as L4 cache, but with the limited amounts Intel can put onto their products, you're still falling back to system memory for your iGPU past a certain point. I look forward to seeing whether they leverage the AMD patent for putting CPUs onto HBM substrates, a la future hypothetical Zen APUs, or come up with some other way to improve their iGPU performance. Because HBM2 starts at 8 gigabits (1 gigabyte) per layer, up to 8 layers, while Intel puts at most 2 gigabits (256 MB) on their Iris Pro products. And we already know how much faster HBM is supposed to be. (I say supposed to, because we won't really know what AMD has been trying to do until we get the 14nm GCN chip we were supposed to have, rather than one they had to hack parts away from just to physically fit onto an HBM substrate with oversized 28nm transistors, all because TSMC couldn't get their poo poo together to do 20nm.)

Intel moving to DDR4 has the side benefit of increasing iGPU memory bandwidth, if only by sheer virtue of being faster. But you can only do so many compression tricks before you hit a wall. It remains to be seen if Intel is content with "good enough" or if they want to move the goalposts on "good enough" into dGPU territory.

SwissArmyDruid fucked around with this message at 11:10 on Dec 19, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

SwissArmyDruid posted:

Intel has embraced Adaptive-Sync. They've said that it will be making its way into their iGPUs in the future. That, combined with AMD's announcement of future support for FreeSync over HDMI and their effort to get it rolled into a VESA standard, means that Nvidia has basically lost the variable refresh war. G-Sync, in its current implementation, will never be a mainstream product, only an enthusiast one.

Putting eDRAM onto chips is still expensive per gigabit. Sure, it can double as L4 cache, but with the limited amounts that Intel can put onto their products, you're still using system memory for your iGPU. I look forward to seeing if they leverage the AMD patent for putting CPUs onto HBM substrates, a la future hypothetical Zen APUs, or if they come up with some other way to improve their iGPU performance. Cuz HBM2 starts at 8 Gigabits (1 gigabyte) per stack, and Intel puts at most 2 gigabits on their Iris Pro products. (256 MB) And we already know about how much faster HBM is supposed to be. (I say supposed to, because we won't really know what AMD has been trying to do until we get the 20nm/14nm GCN chip that we were supposed to have, not one that is physically size constrained by 28nm transistor size.)

Intel moving to DDR4 has the side benefit of increasing iGPU memory bandwidth, if only by sheer virtue of being faster. But you can only do so many compression tricks before you hit a wall. It remains to be seen if Intel is content with "good enough" or if they want to move the goalposts on "good enough" into dGPU territory.

Intel will have to adopt HBM; it's a given. If they don't, Zen APUs will overtake them in a variety of niches, leaving basically only "enthusiast" Intel (all subject to pricing, but thinking on an absolute performance scale). However, an Intel dGPU is laughably unlikely, and any sign of Intel doing this would be telegraphed to hell and back and would be an OEM-only product. You might as well ask why IMT doesn't get back into the dGPU game, considering PowerVR and Iris are comparable. In fact I'm not sure either company can effectively scale their designs to compete with current dGPUs at the high end.

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Intel will have to adopt HBM, it's a given. If they don't Zen APUs will overtake them in a variety of niches leaving basically only "enthusiast" intel (all subject to pricing, but thinking on absolute performance scale). However, dGPU is laughably unlikely and any sign of Intel doing this would be telegraphed by OEM only products. You might as well ask why IMT doesn't get back into the dGPU game considering PowerVR and Iris are comparable.

You don't need to push into the $150 GPU range to start encroaching on dGPUs. Heck, maybe they're already there.

For example, I, personally, would never build a computer with less than a 750 Ti or an R7 260X/360. It's a point of principle for me. I feel that if you go any lower than the roughly $400 you'd spend on a machine around that spec, you might as well just get a NUC. My point is there's still a lot of room from those cards on down.

Imagine if Intel could come close to an R5 230. For the mainstream consumer, that card is probably going to be Good Enough for everything they do. Imagine what that hypothetical iGPU would do to say, Nvidia 920M, 930M and maybe 940M revenue. It's not *that* much of a stretch. But as I have said, it remains to be seen if Intel is going to try and push their boundaries.

As for Intel adopting HBM: Maybe. But is there anyone out there that can meet Intel's demand, or are they going to have to start rolling their own? Sure, they partner with Micron for the flash in their SSDs, but I can't see Hynix diverting production to Intel.

I have equally-hazy prognostications on a joint Intel-Samsung partnership. (Samsung having committed to supplying Nvidia with their HBM2 for Pascal.)

SwissArmyDruid fucked around with this message at 11:32 on Dec 19, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

SwissArmyDruid posted:

You don't need to push into the $150 GPU range to start encroaching on dGPUs.

For example, I, personally, would never build a computer with less than a 750 Ti or an R7 260X/360. It's a point of principle for me. But there's still a lot of room from those cards on down.

Imagine if Intel could come close to an R5 230. For the mainstream consumer, that card is probably going to be Good Enough for everything they do. Imagine what that hypothetical iGPU would do to say, Nvidia 920M, 930M and maybe 940M revenue. It's not *that* much of a stretch. But as I have said, it remains to be seen if Intel is going to try and push their boundaries.

As for Intel adopting HBM: Maybe. But is there anyone out there that can meet Intel's demand, or are they going to have to start rolling their own? Sure, they partner with Micron for the flash in their SSDs, but I can't see Hynix diverting production to Intel.

I have equally-hazy prognostications on a joint Intel-Samsung partnership. (Samsung having committed to supplying Nvidia with their HBM2 for Pascal.)

Intel already has iGPUs beating R5 230s IIRC; the Intel HD 4600/5200 are equal to it. R5 230s exist essentially for very old or underpowered machines. I mean, it had a purpose when it was released as the HD 6450, but now? The R7 240 is where you want to aim, and the PowerVR GT7900 is nearly there (comparable to a GT 730M).

Fair enough. Intel doesn't seem interested in HBM: no one could supply their volume demands, and with the huge upfront cost of getting into that game it simply wouldn't be worth it unless Intel can expand it to other products; they don't seem the type to sell to potential competition.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

FaustianQ posted:

Fair enough. Intel doesn't seem interested in HBM: no one could supply their volume demands, and with the huge upfront cost of getting into that game it simply wouldn't be worth it unless Intel can expand it to other products; they don't seem the type to sell to potential competition.

Intel actually has their own version of HBM called MCDRAM. It's currently only being used in their new Xeon Phi chips but once they work out the kinks I could see it being implemented in their other chips, probably whatever comes after Cannonlake.

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."
I need some help overclocking my Asus Strix 980 Ti (non-OC version). Its stock clock was 1076 MHz boosted, and I'm not sure about the memory readings. Asus GPU Tweak reads a 7010 MHz memory clock, but both GPU-Z and HWiNFO show the memory as 1753 MHz. What's going on there?

What is a safe target to aim for? From the reddit posts on the 980 ti I'm seeing about 1450-1500 MHz clock for the core and 8000MHz for the memory with 1.2V being good on water cooling.

At the moment I'm testing in OCCT at a mild overclock of 1320MHz core with the memory still showing as 7010 in GPU tweak and 1753 in GPU-z. My temperature is only registering at 30C which sounds absurdly low but I can see it jump from 25 idle to 30 under load so I'm not sure what gives.

E: I see that I'm supposed to divide memory clock by 4 to see the frequency in GPU-Z, but I don't understand why.

E2: My voltage is 1163mV with a power target of 110%, which I expect to mean that max voltage would be 1279mV, but I'm reading exactly 1200mV. Is this the voltage locking I've heard about?

Richard M Nixon fucked around with this message at 19:29 on Dec 19, 2015

SlayVus
Jul 10, 2009
Grimey Drawer

LiquidRain posted:

Last I checked a few months ago it was Gelid, Extreme.

Confirming this again. Gelid Extreme in an X pattern is the best application method.

penus penus penus
Nov 9, 2014

by piss__donald

Richard M Nixon posted:

I need some help overclocking my Asus Strix 980 Ti (non-OC version). Its stock clock was 1076 MHz boosted, and I'm not sure about the memory readings. Asus GPU Tweak reads a 7010 MHz memory clock, but both GPU-Z and HWiNFO show the memory as 1753 MHz. What's going on there?

What is a safe target to aim for? From the reddit posts on the 980 ti I'm seeing about 1450-1500 MHz clock for the core and 8000MHz for the memory with 1.2V being good on water cooling.

At the moment I'm testing in OCCT at a mild overclock of 1320MHz core with the memory still showing as 7010 in GPU tweak and 1753 in GPU-z. My temperature is only registering at 30C which sounds absurdly low but I can see it jump from 25 idle to 30 under load so I'm not sure what gives.

E: I see that I'm supposed to divide memory clock by 4 to see the frequency in GPU-Z, but I don't understand why.

E2: My voltage is 1163mV with a power target of 110%, which I expect to mean that max voltage would be 1279mV, but I'm reading exactly 1200mV. Is this the voltage locking I've heard about?

The memory thing is accurate. The "effective" rate is the 7000-8000 MHz number you see, but the true rate is the lower number. DDR means double data rate... and for some reason GDDR5 gets doubled again - and that's about the extent of my knowledge on that :v:.

However, the largest number, the 4x-multiplied one that generally starts at around 7000 MHz, is the most normalized figure and the one to go off of. Very annoyingly, Afterburner reports this figure at just twice the clock speed as opposed to four times, so the number you see there needs to be multiplied by 2, and that applies to the offsets you apply too (so +200 really means +400 MHz effective). So if Afterburner is showing 4000 MHz, you are really running at 8000 MHz.
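
A quick sketch of those relationships using the 1753 MHz reading from the post above (illustrative arithmetic only; the offsets are made-up example values):

```python
# GDDR5 clock bookkeeping, as described above.
base_clock = 1753                     # what GPU-Z / HWiNFO report (MHz)
effective = base_clock * 4            # the "marketing" quad-pumped rate: ~7012 MHz
afterburner_reading = base_clock * 2  # what Afterburner displays: ~3506 MHz

# Afterburner offsets apply to its 2x figure, so +200 MHz there
# is +400 MHz on the effective rate.
offset_in_afterburner = 200
offset_effective = offset_in_afterburner * 2

print(effective, afterburner_reading, offset_effective)   # 7012 3506 400
```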

1400-1500+ MHz is a good core clock OC for these cards.

Around 8 GHz is a good memory OC.

Do core overclocking completely separately from memory overclocking, although it seems you are already doing so.

I believe the Maxwell voltage hard limit is 1.280 volts. If you are using Afterburner, go to settings and select the "extended MSI voltage" profile.

Lately I've been using 3DMark and Heaven 4.0 for maxing out my values, vs. OCCT (or FurMark and anything else). I'd use one of those programs (both free) to really verify that it's not asking for all the voltage it requires.

Edit: I'm almost jealous. With a 1070mhz factory speed you're going to see one of the more dramatic differences after OC for one of these cards.

penus penus penus fucked around with this message at 20:45 on Dec 19, 2015

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."

THE DOG HOUSE posted:

The memory thing is accurate. The "effective" rate is the 7000-8000 MHz number you see, but the true rate is the lower number. DDR means double data rate... and for some reason GDDR5 gets doubled again - and that's about the extent of my knowledge on that :v:.

However, the largest number, the 4x-multiplied one that generally starts at around 7000 MHz, is the most normalized figure and the one to go off of. Very annoyingly, Afterburner reports this figure at just twice the clock speed as opposed to four times, so the number you see there needs to be multiplied by 2, and that applies to the offsets you apply too (so +200 really means +400 MHz effective). So if Afterburner is showing 4000 MHz, you are really running at 8000 MHz.

1400-1500+ MHz is a good core clock OC for these cards.

Around 8 GHz is a good memory OC.

Do core overclocking completely separately from memory overclocking, although it seems you are already doing so.

I believe the Maxwell voltage hard limit is 1.280 volts. If you are using Afterburner, go to settings and select the "extended MSI voltage" profile.

Lately I've been using 3DMark and Heaven 4.0 for maxing out my values, vs. OCCT (or FurMark and anything else). I'd use one of those programs (both free) to really verify that it's not asking for all the voltage it requires.

Edit: I'm almost jealous. With a 1070mhz factory speed you're going to see one of the more dramatic differences after OC for one of these cards.

Thanks for the info. I still seem to be locked at 1.2v but I'm using the Asus tool. I'll try afterburner and see if I can pass the voltage limit then.

I have tried using OCCT for a brief spot test and then running the FF14 Heavensward benchmark to stress test. It's extremely temperamental and crashes at clock speeds over 1450/7020 core/memory.

I still don't go over 30*c for temperatures, which worries me. Something must not be reading right, I imagine.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Richard M Nixon posted:

Thanks for the info. I still seem to be locked at 1.2v but I'm using the Asus tool. I'll try afterburner and see if I can pass the voltage limit then.

I have tried using OCCT for a brief spot test and then running the FF14 Heavensward benchmark to stress test. It's extremely temperamental and crashes at clock speeds over 1450/7020 core/memory.

I still don't go over 30*c for temperatures, which worries me. Something must not be reading right, I imagine.

1.2v is the BIOS locked limit for many cards. You can probably find an edited BIOS to flash if you really want to pass that, but generally it doesn't really get you much.

Power/voltage targets are kinda odd with Maxwell--maxing them out right away is not always the best option for getting a maximal OC. Start with a power target of 105% or so, and no voltage offset, and no memory offset. Then clock up the core until it fails. Push the power target up and see if that helps, if it does, continue pushing the core. Only up voltage once you're already at 110%+ power and failing. Once you figure out your maximum core, then start inching up your memory. Maxing out voltage first often ends up resulting in LOWER OCs.

30C isn't too odd if you're on a decent watercooling loop or even an oversized AIO and aren't really pushing it that hard, but I'd expect more in the mid 40's on something like 3dMark.
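
To make that ordering concrete, here is a hypothetical sketch in Python. The set_* functions and the stress test are stand-ins for whatever tuning tool and benchmark you actually use (they just print and simulate a card here), not a real API:

```python
# Hypothetical sketch of the tuning order described above. The "card" is
# simulated: it holds a bigger core offset as power target and voltage go up.
def set_power_target(pct):  print(f"power target -> {pct}%")
def set_voltage_offset(mv): print(f"voltage offset -> +{mv} mV")

def stress_test(core_offset, power, voltage):
    # Toy stability model, not real hardware behaviour.
    headroom = 100 + (power - 100) * 5 + voltage
    return core_offset <= headroom

def find_max_core(step=25, max_power=115, max_voltage=87):
    power, voltage, core = 105, 0, 0        # start modest: 105% power, stock voltage
    set_power_target(power)
    while True:
        if stress_test(core + step, power, voltage):
            core += step                    # stable: keep pushing the core
        elif power < max_power:
            power += 5                      # failing: raise the power target first
            set_power_target(power)
        elif voltage < max_voltage:
            voltage += 12                   # only then start adding voltage
            set_voltage_offset(voltage)
        else:
            return core                     # out of knobs: this is the max core

print("max stable core offset:", find_max_core(), "MHz")
# Only once the core is settled would you start inching the memory offset up
# the same way, backing off at the first sign of artifacts or errors.
```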

Richard M Nixon
Apr 26, 2009

"The greatest honor history can bestow is the title of peacemaker."

DrDork posted:

1.2v is the BIOS locked limit for many cards. You can probably find an edited BIOS to flash if you really want to pass that, but generally it doesn't really get you much.

Power/voltage targets are kinda odd with Maxwell--maxing them out right away is not always the best option for getting a maximal OC. Start with a power target of 105% or so, and no voltage offset, and no memory offset. Then clock up the core until it fails. Push the power target up and see if that helps, if it does, continue pushing the core. Only up voltage once you're already at 110%+ power and failing. Once you figure out your maximum core, then start inching up your memory. Maxing out voltage first often ends up resulting in LOWER OCs.

30C isn't too odd if you're on a decent watercooling loop or even an oversized AIO and aren't really pushing it that hard, but I'd expect more in the mid 40's on something like 3dMark.

I managed to also change the max voltage on asus tweak and now I'm at 1.218v which seems normal to be the cap.

For some reason, when I tried using afterburner instead I kept seeing my clock get stuck at 500-600MHz when running OCCT. I'm not sure what was going on there - If I cleared settings and put the same OC thresholds into asus tweak it would clock normally (as read by hwmonitor).

I'm settling in to being able to pass the Heavensward benchmark at 1450 / 7400 clocks, but my score is about half of what it was at defaults. I don't understand at all why I would get a lower score with a higher clock rate. Does that mean I'd actually be better off running stock speeds or is the benchmark just weird in how it reports? Like I said, it's known to crash very easily in OC'd machines so it could just be that.

I need to install Firestrike and start running it for final stress testing. Still not cracking 30* but I guess that's just a sign that I built a good custom loop.

Durinia
Sep 26, 2014

The Mad Computer Scientist
Incoming multi-comment memory tech post!

For those trying to price out GDDR5, the rule of thumb is that it's roughly 30-ish% more expensive per bit than DDR.
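
Combining that rule of thumb with the ~$0.67/Gb spot figure mentioned earlier in the thread (a rough illustration only; real contract pricing will differ):

```python
# Rough GDDR5 cost estimate: commodity DRAM price plus a ~30% premium.
ddr_per_gb   = 0.67 * 8            # ~$5.36 per GB at $0.67/gigabit
gddr5_per_gb = ddr_per_gb * 1.3    # ~$6.97 per GB with the rule-of-thumb premium

print(f"4 GB of GDDR5 -> ~${4 * gddr5_per_gb:.2f}")   # ~$27.87
```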

SwissArmyDruid posted:

...

As for Intel adopting HBM: Maybe. But is there anyone out there that can meet Intel's demand, or are they going to have to start rolling their own? Sure, they partner with Micron for the flash in their SSDs, but I can't see Hynix diverting production to Intel.

I have equally-hazy prognostications on a joint Intel-Samsung partnership. (Samsung having committed to supplying Nvidia with their HBM2 for Pascal.)

The goal of NVIDIA/Hynix/AMD bringing HBM to JEDEC to be standardized was so there could be multiple sources for everyone to use, and that helps create a reliable supply base. Samsung as a company made half of the DRAM on earth last year. HBM2 volumes will be minuscule, comparatively, meaning they could supply as much as needed - they just need advance notice to schedule the fab capacity. This is doubly true with HBM2 likely being a pretty high margin part in the near term.

Krailor posted:

Intel actually has their own version of HBM called MCDRAM. It's currently only being used in their new Xeon Phi chips but once they work out the kinks I could see it being implemented in their other chips, probably whatever comes after Cannonlake.

MCDRAM is a short-term, proprietary Micron solution that Intel needed to create because the related processor (Xeon Phi/KNL) has a schedule that lands in advance of full HBM2 production. You wouldn't want it on anything but Xeon Phi, as the latency is massive compared to DDR. It would also have even worse supply-base issues, as Micron is the only one who can make it, for multiple reasons.

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

Richard M Nixon posted:

I managed to also change the max voltage on asus tweak and now I'm at 1.218v which seems normal to be the cap.

For some reason, when I tried using afterburner instead I kept seeing my clock get stuck at 500-600MHz when running OCCT. I'm not sure what was going on there - If I cleared settings and put the same OC thresholds into asus tweak it would clock normally (as read by hwmonitor).

I'm settling in to being able to pass the Heavensward benchmark at 1450 / 7400 clocks, but my score is about half of what it was at defaults. I don't understand at all why I would get a lower score with a higher clock rate. Does that mean I'd actually be better off running stock speeds or is the benchmark just weird in how it reports? Like I said, it's known to crash very easily in OC'd machines so it could just be that.

I need to install Firestrike and start running it for final stress testing. Still not cracking 30* but I guess that's just a sign that I built a good custom loop.

If you're doing overclocking tests and you get a driver crash-then-restart, you have to reboot to get back to full clockspeed. After a crash, I believe the GPU defaults to a very low speed, so if you continue testing without a reboot, all your benchmarks will be very low.

SlayVus
Jul 10, 2009
Grimey Drawer

JnnyThndrs posted:

If you're doing overclocking tests and you get a driver crash-then-restart, you have to reboot to get back to full clockspeed. After a crash, I believe the GPU defaults to a very low speed, so if you continue testing without a reboot, all your benchmarks will be very low.

Which is weird because it wasn't like that until maybe the 900 series came out.

Kazinsal
Dec 13, 2011
I'm currently running my R9 290 at 1100 MHz core, 1375 MHz memory. If I drop the memory down to, say, 1300, would it be feasible to up the core without increasing temperatures too much? I currently don't have a voltage bump on it, but my power limit is maxed.

Reference board, non-reference cooler (XFX R9 290 DD).

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Kazinsal posted:

I'm currently running my R9 290 at 1100 MHz core, 1375 MHz memory. If I drop the memory down to, say, 1300, would it be feasible to up the core without increasing temperatures too much? I currently don't have a voltage bump on it, but my power limit is maxed.

Reference board, non-reference cooler (XFX R9 290 DD).

You can try, but it's unlikely to make much of a difference. Most 290s tap out around 1100-1150 core without a voltage bump, regardless of what you're doing with the memory.

Odette
Mar 19, 2011

Not sure if I've asked this, but since my 980Ti + 144Hz monitor are using up excessive power until NVIDIA fix their >120Hz poo poo ...

Is there any way I can automagically set desktop to 120Hz and everything else to 144Hz?

Slider
Jun 6, 2004

POINTS
I've heard you can use the nvidiainspector "multi display power saver" tool for that, but when I did it, it crashed my PC.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
Serious question: is there enough difference between 120 and 144 that worrying about that is worth it?

Generic Monk
Oct 31, 2011

Dogen posted:

Serious question: is there enough difference between 120 and 144 that worrying about that is worth it?

no

SlayVus
Jul 10, 2009
Grimey Drawer

Dogen posted:

Serious question: is there enough difference between 120 and 144 that worrying about that is worth it?

Differentiating between 120 and 144 is harder, since you're talking about only a 20% increase. Going from 60 to 120 is easier to notice just because the increase is so large.

It would be akin to going from a 480 to a 980: the performance gain is large enough to notice. Going from a 970 to a 980 is harder to notice; you get better average fps, but it's not the difference between being able to play a game and not.
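
The frame-time arithmetic behind that comparison (just the reciprocal of the refresh rate):

```python
# Milliseconds per frame at each refresh rate.
for hz in (60, 120, 144):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")

# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms:
# 60 -> 120 saves ~8.3 ms per frame, 120 -> 144 only ~1.4 ms.
```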

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Odette posted:

Not sure if I've asked this, but since my 980Ti + 144Hz monitor are using up excessive power until NVIDIA fix their >120Hz poo poo ...

Is there any way I can automagically set desktop to 120Hz and everything else to 144Hz?
There are a couple of utilities like QuickRes which might be able to do that for you, but you should first verify that keeping it at <120Hz actually works for you. For me, anything over about 75Hz triggers NVidia's dumb bug, so I just deal with it. Using NVIDIA Inspector did work, but I do a lot of alt-tabbing and whatnot, and whenever the game is not in focus it isn't considered by Inspector, which will down-shift the clocks after ~10s and then take another 3-4s to clock back up once I alt-tab back. So there's that.

beergod
Nov 1, 2004
NOBODY WANTS TO SEE PICTURES OF YOUR UGLY FUCKING KIDS YOU DIPSHIT
Is there any chance my i5 4590 is bottlenecking my SLI 970s and 16GB RAM?

SlayVus
Jul 10, 2009
Grimey Drawer
Doubtful, but why do you ask?


Odette
Mar 19, 2011

DrDork posted:

There are a couple of utilities like QuickRes which might be able to do that for you, but you should first verify that keeping it at <120Hz actually works for you. For me, anything over about 75Hz triggers NVidia's dumb bug, so I just deal with it. Using NVIDIA Inspector did work, but I do a lot of alt-tabbing and whatnot, and whenever the game is not in focus it isn't considered by Inspector, which will down-shift the clocks after ~10s and then take another 3-4s to clock back up once I alt-tab back. So there's that.

OK, so there's no way of actually fixing this until NVIDIA update their drivers? Got it.
