Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Yudo posted:

In the event no one posted it, the rumored specifications of the soon to be released GTX 760:

http://videocardz.com/43001/nvidia-geforce-gtx-760-has-1152-cuda-cores

1152 CUDA cores, 7 GHz memory clock and a nice to see 256 bit memory bus. I am in the market for a sub $300 card and the 192 bit bus of the 660 ti is a deal breaker. I hope there is a 3 GB version.

Wouldn't the hope be for a 4GB version, to ensure efficient memory allocation with the 256-bit controller setup? I know nVidia claims voodoo magic tech helping them with the 600-generation cards that have 192-bit buses with 2GB of VRAM, but I don't buy that you get to have mismatched memory controller allocation without performance loss somehow.
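(For the curious, here's a rough back-of-the-envelope sketch in Python of why 3 GB is the awkward option on a 256-bit bus. The chip size is an assumption - typical 2 Gbit GDDR5 parts - not anything nVidia has confirmed for this card.)

pre:
# Sketch: GDDR5 chips are 32 bits wide, so a 256-bit bus means 8 memory
# controllers. Assuming common 2 Gbit (256 MB) chips, a capacity only
# populates every controller evenly if the chip count divides by 8.
BUS_BITS, CHIP_BITS, CHIP_MB = 256, 32, 256
controllers = BUS_BITS // CHIP_BITS            # 8

for gb in (2, 3, 4):
    chips = gb * 1024 // CHIP_MB
    leftover = chips % controllers
    note = "even" if leftover == 0 else "uneven (some controllers carry more memory)"
    print(f"{gb} GB = {chips} chips across {controllers} controllers -> {note}")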

Factory Factory posted:

That suggests there's a 760 Ti still to be had, and the 760 goes head to head with the 660 Ti.

Given the success of their Ti branding, there's gotta be a Ti (or two) this generation as well - but I wonder what it will be. If those leaked specs are accurate for the 760, and we already know the 770 is basically a 680 with a shot in the arm, then where does that leave the 760Ti to go in terms of performance? Halfway between them seems to be an obvious option, but that would make it a price:performance demolisher. Though maybe nVidia is cool with that.

Agreed fucked around with this message at 00:17 on Jun 18, 2013


Yudo
May 15, 2003

Factory Factory posted:

That suggests there's a 760 Ti still to be had, and the 760 goes head to head with the 660 Ti.

There may not be a 760 Ti:

Videocardz.com posted:

According to the leaker, NVIDIA did in fact want to use the GTX 760 Ti naming, but they dropped the Ti idea later, so the card ended up with the pure GTX 760 sexiness.

Ignoring the awfulness of the above sentence: despite seriously cutting down the GK104 core, the rumored 760 is clocked so high and has so much more bandwidth that it might perform quite close to the current 670. That does not leave much room to position a Ti--especially considering that the 760 will be ~$300.

GTX 770: $400
GTX 760: $300

Where would a Ti fit in?

Edit:

Agreed posted:

Wouldn't the hope be for a 4GB version, to ensure efficient memory allocation with the 256-bit controller setup? I know nVidia claims voodoo magic tech helping them with the 600-generation cards that have 192-bit buses with 2GB of VRAM, but I don't buy that you get to have mismatched memory controller allocation without performance loss somehow.


You're right; I didn't think about that, only about it being inexpensive. Fitting 3 GB on there would be more challenging than 4.

Yudo fucked around with this message at 00:21 on Jun 18, 2013

Gonkish
May 19, 2004

Another random query: I do a lot of gaming on PC (I haven't had a console in years), so I'm wondering if it's worth grabbing a 770 with 4GB memory. Is that much RAM a good or bad idea?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
It depends entirely on things not yet known! :iiam:

If the next console generation makes big use of the large amounts of RAM they have, and/or you run at 2560x resolution, it is possible that 2 GB of VRAM could become a bottleneck for a 770 before you've run out of shader/compute/etc. power. But it's not certain this will be the case, and if you're running at 1080p or so, it might not affect you at all.
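(To put rough numbers on the resolution part - a tiny sketch only. The buffer count is a made-up assumption and real VRAM use is mostly textures, so this just shows how the per-frame buffers scale, roughly doubling from 1080p to 2560x1600.)

pre:
# Illustration only: raw render-target memory scales with pixel count.
# "targets=6" is an arbitrary stand-in for color/depth/G-buffer surfaces,
# not a measurement of any real game.
def render_target_mb(width, height, bytes_per_pixel=4, targets=6):
    return width * height * bytes_per_pixel * targets / 1024**2

for w, h in [(1920, 1080), (2560, 1440), (2560, 1600)]:
    print(f"{w}x{h}: ~{render_target_mb(w, h):.0f} MB of render targets")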

randyest
Sep 1, 2004

by R. Guyovich

Shadowhand00 posted:

For comparison's sake, my I7-920 with the 780 (which is a beast) got the following:

Score: 8705
Graphics Score: 10578
Physics Score: 8274
Combined Score: 3871

As I stated earlier, I'm able to run everything I have at 1440p without any hitches. I also don't seem to be having any issues with the drivers but I do believe my card is a bit of an OCing poo poo head. For Crysis 3, anything over +50mhz will result in a crash.

Er, so which 780 is that? I have the EVGA 780 SC ACX factory OC'd and I'm seeing scores 10% lower even with a 570 as a physX card (auto settings.) Without the 570 it's slightly worse.



Maybe I'll borrow an I7-3770 to replace my I5-2500k and see if that makes a difference. Although I can play everything on ultra at good frames I still feel like sometimes it bogs down when I don't understand why. :(

Cavauro
Jan 9, 2008

I wonder if that benchmark number has to do with PCI-E bandwidth issues using both a 780 and 580 on a 2500k-equipped motherboard. But I'm not sure whether or not it would make a difference just telling the 580 not to do anything. (Versus not having it in there)

randyest
Sep 1, 2004

by R. Guyovich

Cavauro posted:

I wonder if that benchmark number has to do with PCI-E bandwidth issues using both a 780 and 580 on a 2500k-equipped motherboard. But I'm not sure whether or not it would make a difference just telling the 580 not to do anything. (Versus not having it in there)
It's slightly worse with the 570 out of the box :(

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Gaming computer 101, MSI:



When your CPU and GPU both throttle based on temperature headroom, DON'T HOOK THEM UP IN A HEATPIPE LOOP TO A SINGLE FAN. How did this get through testing?

It does fine when CPU /or/ GPU is stressed, but when both are:



Then parts that should be much faster perform much worse.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

randyest posted:

It's slightly worse with the 570 out of the box :(

Unless it explicitly says otherwise, the Physics score should be CPU-based. PhysX adoption is low and 3DMark doesn't owe nVidia any favors to have their brand of GPGPU physics in the benchmark.

Edit: Half right, half wrong. It does use GPGPU physics, but a platform-agnostic variety that will in no way take advantage of a headless PhysX coprocessor. In 3DMark11, for example, Bullet Physics (a standards-compliant open GPGPU physics library) is used, and in different configurations depending on the test being run. Sometimes the CPU handles it, sometimes the GPU, sometimes they split duties. In that regard it is similar but not identical to PhysX in most PhysX games, where the CPU has its own physics going on and the GPU does the especially fancy stuff. So while it is analogous, it can't be directly compared - and a headless coprocessor will not affect the score one way or another unless it is causing bandwidth loss due to PCI-e lane limitations on the rendering card being tested.

The more you know!

What's your CPU sitting at? His older i7 could be very nicely overclocked, if you're running a stock Sandy Bridge setup that could explain it in one step, sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation.

Now I'm wanting to burn some bandwidth and see how my system does. Though at least you've dispelled one fear I had, with 1 day until my own 780 is out for delivery: the 780 is not meaningfully bandwidth limited at PCI-e 2.0 8x, unless you're running a 16x/4x PCH setup. :ohdear:

Edit: 1 gig for the installer? Jesus, it's just a benchmark... Alright, here's $10 to AT&T so I can see what my stuff's up to. I guess it's still about a $15 discount once the 780 gets in and I can register it for free, but god drat.

Agreed fucked around with this message at 06:51 on Jun 18, 2013

randyest
Sep 1, 2004

by R. Guyovich

Agreed posted:

What's your CPU sitting at? His older i7 could be very nicely overclocked, if you're running a stock Sandy Bridge setup that could explain it in one step, sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation.
2500k at nominal 3.5GHz with allowed 3.9GHz boost.

I'd be worried if I thought synthetic benchmarks were worth a drat and if my Metro Last Light, Tomb Raider, Bioshock Infinite, and Sleeping Dogs in-game benches and performance weren't so good.

I'll try an i7-3770 to see if it matters and go from there. Trial and error is so tedious.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
You're honestly better off just overclocking that 2500K instead of faffing about with a 3770. You could easily get higher performance with that 2500K in games overclocked than the 3770 at stock. (3770 overclocked is a different story of course).

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

To quantify that, here are my score totals from the basic benchmark, with a heavily overclocked GTX 680, running the test off of an SSD, 16GB of 1600MHz DDR3, and a 2600K at 4.7GHz.



Firestrike in particular I got:

Score: 7186
Graphics: 8003
Physics: 12033
Combined: 3033

My graphics score is nearly 2K below yours, 2.5K below his, but my Physics score is double yours, and outpaces his by about 4K. Overclocking a lot pays off in some usage scenarios :shobon:

Get a nice cooler and kick that 2500K in the teeth, Sandy Bridge is born to run and you can probably close the gap nicely.

I do wonder why your graphics score is lower than his, wish I knew each of your clocks. I /definitely/ wish I knew whether you are running in PCI-e 2.0 8x/8x mode or if you're using a motherboard that can do split PCI-e 2.0 16x for the primary slot then PCH the four auxiliary lanes for a PCI-e 2.0 4x lane for your PhysX card.

My motherboard, Sabertooth P67, only runs 8x/8x. It's spare on certain features and that's one of them. Thanks to your results, if you can tell me what your PCI-e 2.0 bandwidth is on the card, I can provide us all with some data on whether and to what extent the GTX 780 is bandwidth limited in PCI-e 2.0; there are reasons to speculate "not very much at all" but also reasons that it might be "more than one would hope."
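(For anyone following along, here's the raw math on those link widths - a quick sketch, per direction and ignoring protocol overhead beyond line coding, so real-world numbers come in a bit lower.)

pre:
# Per-lane transfer rate and line coding for each PCI-e generation.
RATE_GT_S = {"2.0": 5.0, "3.0": 8.0}
CODING    = {"2.0": 8 / 10, "3.0": 128 / 130}

def pcie_gb_s(gen, lanes):
    # GT/s * coding efficiency * lanes, divided by 8 bits per byte
    return RATE_GT_S[gen] * CODING[gen] * lanes / 8

for gen, lanes in [("2.0", 4), ("2.0", 8), ("2.0", 16), ("3.0", 16)]:
    print(f"PCI-e {gen} x{lanes}: ~{pcie_gb_s(gen, lanes):.1f} GB/s each way")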

Agreed fucked around with this message at 10:04 on Jun 18, 2013

Shitty Treat
Feb 21, 2012

Stoopid?

randyest posted:

It's slightly worse with the 570 out of the box :(

The physics score seems to love clock speed and/or more cores. For example, my lovely 1090T (4GHz), which gets beat in most other things by the 2500K, got nearly 2.5k more than the stock 2500K, but gets absolutely obliterated in the physics test by the above 2600K at high clocks.

the runs formula
Feb 23, 2013

by Lowtax
What benchmarking software is that?

beejay
Apr 7, 2002

3dmark11, specifically the Fire Strike portion, which runs last, unless you buy the advanced version, and then you can run it alone.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
If you're wondering how mobile gaming stacks up, here is a 675M with a 3610QM

Shadowhand00
Jan 23, 2006

Golden Bear is ever watching; day by day he prowls, and when he hears the tread of lowly Stanfurd red,from his Lair he fiercely growls.
Toilet Rascal

the runs formula posted:

What benchmarking software is that?

You also get it for free from EVGA right now (if you buy one of their 7-series cards).

For comparison's sake, here is my benchmark:

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
:siren:Please stop posting benchmark results:siren:

Unless you have an interesting or unusual configuration or it's directly in response to a question you can't provide a link for, or it otherwise adds value. It's easy for a thread to get bogged down because a bunch of people think posting their benchmarks is a contribution.

beejay
Apr 7, 2002

So EVGA just basically told me to get hosed - are there any GPU manufacturers with as solid of customer service as they used to have? Nvidia or AMD.

Colonel Pancreas
Jun 17, 2004


Yudo posted:

In the event no one posted it, the rumored specifications of the soon to be released GTX 760:

http://videocardz.com/43001/nvidia-geforce-gtx-760-has-1152-cuda-cores

1152 CUDA cores, 7 GHz memory clock and a nice to see 256 bit memory bus. I am in the market for a sub $300 card and the 192 bit bus of the 660 ti is a deal breaker. I hope there is a 3 GB version.

As someone who knows very little to nothing about what makes one GPU better than the other, could anyone break down what the major differences between the 670 and 760 would be, assuming the specs listed on that site are accurate? Specifically, I'm wondering which will serve me better for gaming at 1920x1200. The 760 looks better in some regards, while the 670 does in others.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

beejay posted:

So EVGA just basically told me to get hosed - are there any GPU manufacturers with as solid of customer service as they used to have? Nvidia or AMD.

How'd we get to this post from

beejay posted:

That's weird, I just did an RMA with EVGA and it was super fast. I put it in on a Saturday even I think and they answered within minutes. You can't actually do the RMA part until you get a ticket going or they will reject it. I'd try emailing them again.

that one? What happened?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Colonel Pancreas posted:

As someone who knows very little to nothing about what makes one GPU better than the other, could anyone break down what the major differences between the 670 and 760 would be, assuming the specs listed on that site are accurate? Specifically, I'm wondering which will serve me better for gaming at 1920x1200. The 760 looks better in some regards, while the 670 does in others.

Okay.
pre:
			670		760
GPU			GK104		GK104
Shaders			1344		1152
Shader clock		915 MHz		1072 MHz
Shader throughput	1.229 Giga	1.234 Giga
in shaderhertz
Boost clock algorithm	1.0 (TDP only)	2.0 (TDP + temp)
VRAM			2 GB		2 GB
VRAM clock		6 GHz		7 GHz
VRAM bus		256-bit		256-bit
VRAM bandwidth		192 GB/s	224 GB/s
Other poo poo like ROPs	Whatever	It's the same
You can think of a 760 as a 670 with a VRAM overclock and maybe a bit better boost clocking.
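(If you want to see where the derived rows come from, here's a quick sketch - the 760 inputs are the leaked numbers, so treat them as rumor.)

pre:
# Derived rows: shader throughput = shaders x clock (the table's joke
# "shaderhertz" unit), VRAM bandwidth = bus width / 8 x effective clock.
cards = {
    "670": dict(shaders=1344, clock_mhz=915,  bus_bits=256, mem_ghz=6),
    "760": dict(shaders=1152, clock_mhz=1072, bus_bits=256, mem_ghz=7),
}
for name, c in cards.items():
    shaderhertz = c["shaders"] * c["clock_mhz"] / 1e6   # ~1.23 for both, per the table
    bandwidth = c["bus_bits"] / 8 * c["mem_ghz"]        # GB/s: 192 vs 224
    print(f"{name}: {shaderhertz:.3f} shaderhertz, {bandwidth:.0f} GB/s")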

WaffleLove
Aug 16, 2007

beejay posted:

So EVGA just basically told me to get hosed - are there any GPU manufacturers with as solid of customer service as they used to have? Nvidia or AMD.

XFX gave me an RMA after a new card I bought was BSODing my computer. No issues, no problems. They even went as far as to say that if the replacement didn't work (same issue), I would have been able to discuss other options of the same value.

beejay
Apr 7, 2002

Agreed posted:

How'd we get to this post from


that one? What happened?

It's a long story. I'll PM you in a minute. Basically I have had to RMA multiple times and today I got a call from a "manager" who was very rude and unhelpful and I'll probably be selling a 660ti on SA-mart soon.

Squibbles
Aug 24, 2000

Mwaha ha HA ha!
Anecdotes, I guess, but I had not-bad customer service with MSI. The only problem was that their card design was flawed (Nvidia 570 reference design), so even after 3 RMAs a card would never last more than a few months without artifacting and degenerating into BSODs. Also, the first time I RMA'd with MSI I got a brand new card in the original box; the second time it came in a generic box, so I don't know if it was refurbed or what. The third RMA wasn't by me (I gave the card away), so I don't know what they got in replacement, but I do know it didn't last either.

I haven't had to deal with Asus's support yet because their design actually worked out of the box.

randyest
Sep 1, 2004

by R. Guyovich
Alereon I totally agree benchmark postfests lead to bad whitenoise posting, but I think right now we're all collaborating pretty well to investigate interesting aspects of CPU / GPU / PhysX tradeoffs. If this is not OK please let me know and I'll edit out.

Agreed posted:

What's your CPU sitting at? His older i7 could be very nicely overclocked, if you're running a stock Sandy Bridge setup that could explain it in one step, sorry if I missed it. He does have a higher GPU score, though, which is a bit odd in this situation.
It was at stock 3.5GHz with 3.9GHz boost. I have an ASUS P8Z68-pro and a coolermaster hyper 212 cooler, but never bothered to OC since I was getting what I wanted. I let the ASUS utility OC it to "fast" (not "extreme") and it bumped it up to 4.3GHz. After the OC the 3dmark physics result scaled even higher. +23% in CPU speed bumped the physics scores by ~30%, with or without the 570 in as a physX GPU, which seems weird. No change in graphics or combined though, which is not as weird. v:shobon:v
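(Quick sanity check on those percentages - base clock against the utility's setting, boost behavior ignored, so only a ballpark.)

pre:
base_ghz, oc_ghz = 3.5, 4.3
print(f"clock gain: {oc_ghz / base_ghz - 1:.0%}")   # ~23%, as quoted
# A ~30% physics gain from a ~23% clock bump is roughly what a
# CPU-bound test would show; the extra few percent could plausibly be
# run-to-run variance or better sustained turbo.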


Maybe I'll try "extreme" OC to see what happens, but this is looking pretty good to me as it is. Thanks for all the input - I think my money would be better spent on a 2nd 780 ACX and better cooling to OC my 2500k, instead of a 3770k (which I guess might be harder to OC). Basically, this:

Agreed posted:

Get a nice cooler and kick that 2500K in the teeth, Sandy Bridge is born to run and you can probably close the gap nicely.
There's clearly little benefit from going 2500k to 3770k, and since my goal is 60+ FPS min on anything available now on ultra max'ed everything, I'm planning on another 780 and a 1kW PSU (that sounds crazy but on the other hand it makes total sense.)

Agreed posted:

I do wonder why your graphics score is lower than his, wish I knew each of your clocks. I /definitely/ wish I knew whether you are running in PCI-e 2.0 8x/8x mode or if you're using a motherboard that can do split PCI-e 2.0 16x for the primary slot then PCH the four auxiliary lanes for a PCI-e 2.0 4x lane for your PhysX card.
How do I check that for sure? P8Z68-pro, if I recall, claimed 16x x 2 but I may be mistaken or remembering wrong.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
You find out by looking at the specs for the board. The P8Z68-V Pro has:

  • 2 x PCIe 2.0 x16 (x16 or dual x8)
  • 1 x PCIe 2.0 x16 (x4 mode, black) *1
  • 2 x PCIe x1
  • 2 x PCI

The CPU hands out 16 PCIe 2.0 lanes (in your case; IVB/HSW hand out 16 3.0 lanes). On SNB/Z68, these can only be organized in x16 or x8/x8.



The PCH has 8 PCIe lanes to hand out. These get pared down by peripherals - extra SATA controllers, FireWire, non-Intel NICs, etc. The rest are routed to expansion slots, i.e. to the black PCIe x16 (x4 electrical) slot and the two PCIe x1 slots. The slots may be overloaded, especially on Asus boards, so the x4 slot may only work at x1 electrical if certain peripherals or other PCIe x1 slots are used - check the manual/BIOS.

Remember that footnote marker on the x16 (x4 electrical) slot above? Here's the footnote:

*1: The PCIe x16_3 slot shares bandwidth with PCIe x1_1 slot, PCIe x1_2 slot, USB3_34 and eSATA. The PCIe x16_3 default setting is in x1 mode.

So on SNB/Z68, no PCIe x4 or x1 slot is ever hooked up to the CPU.

Z77 and Z87 can get more complex because they offer x8/x4/x4 splits from the CPU:



So you may need to check reviews and/or the manual to see how lanes are allocated when particular slots are filled.
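(A toy model of the lane budget described above - not board firmware logic, just the SNB/Z68 rules from this post, to make the possible configurations explicit.)

pre:
# CPU lanes: 16 total on SNB/Z68, split only as x16 or x8/x8.
def cpu_slot_widths(cards_in_cpu_slots):
    return [16] if cards_in_cpu_slots <= 1 else [8, 8]

# PCH black slot: x4 electrical per the *1 footnote, dropping to x1 if
# the shared x1 slots / USB3 / eSATA are in use (x1 is the default).
def pch_slot_width(sharing_in_use):
    return 1 if sharing_in_use else 4

print(cpu_slot_widths(2))                         # [8, 8]  - both cards in the blue slots
print(cpu_slot_widths(1), pch_slot_width(False))  # [16] 4  - one card on the CPU, one on the PCH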

Factory Factory fucked around with this message at 23:10 on Jun 18, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

According to your motherboard's specs page, it's basically running two 16x that split to 8x/8x when running dual card (for SLI, for PhysX, doesn't matter).

http://www.asus.com/Motherboards/P8Z68V_PRO/#specifications

Depending on what slots you've got the cards plugged into, it is possible (all values relative to PCI-e 2.0) that you've got the 780 and 570 at 8x/8x in the two CPU-fed slots, or the 780 at 16x with the 570 at 4x via the PCH slot.

If they're adjacent it's most likely the 8x/8x split.

Could you download GPU-Z and look at this here box to see what it says about your 780?



Not what it's capable of but what it's actually running at.

Edit: Factory Factory beat me to the post, because gently caress PHONE INTERNET, but please do this so we can see if your 780 is running unbridled or what the deal is there.

Also, do not expect a dedicated PhysX card to improve your score. 3Dmark uses Bullet Physics, it runs on the rendering card only, and isn't proprietary like PhysX. Wouldn't be much of a benchmark if only nVidia cards could use it (not like it's much of a benchmark anyway, down to brass tacks - in-game matters all, these numbers are helpful in their limited scope but shouldn't be taken as determinative of much of anything).

Also, you are a certified crazy person for going with two 780s, but I'd probably spend the money if I had it too, so I'm gonna refrain from judgment; just don't let Crackbone get wind of it or you'll give him a seizure, and for good reason ;)

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

randyest posted:

Alereon I totally agree benchmark postfests lead to bad whitenoise posting, but I think right now we're all collaborating pretty well to investigate interesting aspects of CPU / GPU / PhysX tradeoffs. If this is not OK please let me know and I'll edit out.
I'm not trying to ban all posting of benchmarks; feel free to post anything you genuinely feel is beneficial to the thread. My main point is that I don't see what posting screenshots of 3DMark scores does to help, since you can just use the online result browser to get scores for various system configurations. I want to avoid a situation where people think "oh, we're posting benchmarks now" and the thread is unreadable for a page or two, because this is a pretty good thread with good discussion and I like reading it.

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.
Yeah, and you should try manually setting your 2500K instead of letting that weird Asus suite do its weird things. I've been running mine stable at 4.7Ghz since I've had it with a few adjustments (voltage offset capped at 1.36v).

dog nougat
Apr 8, 2009

beejay posted:

It's a long story. I'll PM you in a minute. Basically I have had to RMA multiple times and today I got a call from a "manager" who was very rude and unhelpful and I'll probably be selling a 660ti on SA-mart soon.

Don't abuse the system man :v:.

But seriously, that's odd. I managed to break a capacitor (I think) off the back of my card while putting it back to factory spec and they still honored my RMA. Well, sort of - there's going to be some cost associated with the repair now, but considering I voided my warranty, it was pretty awesome of them. Granted, this happened after the RMA was approved, and I emailed them about it and took photos to document the damage. It's still really good of them... in theory at least; I guess when I find out what it will cost me I may sing a different tune. Still, it's probably better than having a $300 paperweight and having to shell out for a new card.

randyest
Sep 1, 2004

by R. Guyovich

Agreed posted:

According to your motherboard's specs page, it's basically running two 16x that split to 8x/8x when running dual card (for SLI, for PhysX, doesn't matter).

... great helpful :words: ...
Thank you very much for the GPU-Z link; I've used the CPU version for years but didn't know a GPU version was a thing. And you are correct; it's 8x/8x on both:

Agreed posted:

Also, do not expect a dedicated PhysX card to improve your score. 3Dmark uses Bullet Physics, it runs on the rendering card only, and isn't proprietary like PhysX. Wouldn't be much of a benchmark if only nVidia cards could use it (not like it's much of a benchmark anyway, down to brass tacks - in-game matters all, these numbers are helpful in their limited scope but shouldn't be taken as determinative of much of anything).
That's enlightening. I didn't know that and it does help me understand why adding the 570 for physX doesn't make much difference in 3dmark but makes a significant difference in Metro Last Light and Borderlands 2. Unfortunately it also seems to turn Lara's head into a giant blurry frizzy white girl afro on Tomb Raider when hair quality is set to high/tressFX, so it's not always a plus.

Agreed posted:

Also, you are a certified crazy person for going with two 780s, but I'd probably spend the money if I had it too, so I'm gonna refrain from judgment; just don't let Crackbone get wind of it or you'll give him a seizure, and for good reason ;)
No argument there; I have more dollars than sense this year thanks to a ridiculous unexpected bonus, and I've already spent everything I can on my wife, house, cars, retirement funds, etc. I know crazy CPU/GPU stuff is not an "investment" but it makes me happy, keeps me out of trouble, and we like to play games together and have a lot of fun and she also likes to push the graphical quality / performance, so a few grand isn't a big issue as long as I'm getting near max bang for buck, which you guys are really awesome at helping me do.

Seriously, thanks a ton to everyone here. I'm going to rip out the 570 and make sure the 780 is running at 16x in that case and re-bench everything. If you've replied to me in this thread and you want a forum upgrade or cert or just some paypalbux please pm or email me (username at gmail) and I'll set you up.

TheRationalRedditor posted:

Yeah, and you should try manually setting your 2500K instead of letting that weird Asus suite do its weird things. I've been running mine stable at 4.7Ghz since I've had it with a few adjustments (voltage offset capped at 1.36v).
How? Straight BIOS modifications? I thought the (admittedly weird) ASUS tool was pretty cool since it does the trial and error for me. Am I being dumb?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Not BIOS modding, BIOS settings. It exposes the same manual settings as AI Suite, except it's a good bit more stable for pushing the envelope, whereas in-Windows tools can freeze for no reason or lock up on settings that would work if entered from the BIOS.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Some news on the GPGPU and HPC front:

Intel announced its refresh of the Xeon Phi, "Knights Landing." It's still not really a GPU, but it is still a many-core, highly parallel processor: basically a 61-core, 64-bit, P55C-Pentium-derived chip. What's different is that instead of strictly being a coprocessor unit (a system on a board with network interfaces over PCIe), the Knights Landing Xeon Phi will also come in socketed versions, and it will be able to serve as the system's primary processor. This is kinda like running Linux natively on a GeForce Titan.

With all the hubbub of AMD's ARM-based microserver CPUs and APUs with integrated Radeon bits and the upcoming Intel Atom refresh, we're about to see a new wave of high-density, many-core computers that sit in between where CPUs and GPUs currently preside.

The eventual goal of both AMD and Intel's heterogeneous systems architecture movements for x86 is to allow seamless single-system switching between all these processors from a shared memory space. GPUs and GPU-like architectures handle graphics and other workloads where you need a lot of calculations, but the calculations themselves are independent and simple; CPUs handle tasks with complex single-threaded needs; and many-core architectures handle the tasks in between, like virtualization farms of simple webservers and webapps.

Knights Landing also has a preview of a tech we'll probably start seeing in Nvidia's Maxwell and the next major GCN revision: memory dies integrated into the chip package. There are a number of ways this can be organized (e.g. as a layer of cache above L1/L2, as an independent cache pool, or attached to a memory controller and treated like RAM), but the core idea is that with memory that close to the core that needs it, you can get REALLY high bandwidth out of it. Depending on how the memory is integrated into the package, though, it can cause big problems for cooling the compute-oriented part of the chip.

--

The other big news is that Nvidia's Kepler architecture can now be licensed as IP blocks, and future architectures can be, too.

This is kinda holy poo poo territory considering that Nvidia already has a strong SoC business combining ARM cores with their graphics hardware, and now they are opening that up to everyone. We're talking about anyone who uses IP blocks being able to stick GeForces on their chips.

And you know who uses IP blocks? Everyone. Not just Apple and Qualcomm and Samsung. Intel is doing it. AMD is doing it. We're talking the possibility for Intel to drop its GPU development and just stick Goddamn Keplers on things. We're talking AMD APUs with x86, ARM, and GeForce cores. Cats and dogs living together. Mass hysteria.

Eventually, this may mean semi-custom GPUs not just at the card level, but at the SMX level. Apple offering customized GPUs in its systems with Apple-only tweaks that are more than just a BIOS lock. CUDA could kill OpenCL forever, as long as you're willing to give Nvidia a piece of the pie.

That doesn't mean Nvidia is quitting the chip business, though, oh no. In fact, the next logical step is licensing an LTE modem for their own SOCs and getting into the phone business. But that's a topic for another thread.

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


If it means we get Intel-quality drivers for nVidia graphics, even if it's just for R-class Intel CPUs, I'll probably react how the AIs said humanity reacted to the 'ideal world' version of the Matrix in one of that movie's terrible sequels.

But the odds of that happening are a Large Number to one against.

metasynthetic
Dec 2, 2005

in one moment, Earth

in the next, Heaven

Megamarm
Got my dual 770 4GB classifieds from EVGA yesterday, and ran into an unexpected snafu: apparently with some mobo / 700 series combos from various manufacturers (including mine, a Z87 Extreme4), you can't enable SLI and surround screens simultaneously. Activating one toggles the other off. I couldn't find any solutions by googling. If anyone has any ideas please share, but I think I'm boned until a new driver release fixes it.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
That... hm. That looks like just the state of things, considering no SLI mode surround is offered on the Nvidia 3D Surround webpage. That kinda blows.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


What the gently caress :psyduck:

This bit of news has incredibly wide-ranging implications and I can't even begin to think... god drat, they're doing it in a way that leverages practically nothing on their end and positions them to be a major player in virtually anything, all on the basis of exceptional designs. And given their own diverse interests, it's just... it's... hooooly poo poo.

nVidia is gonna get so much richer because of this. They're licensing... seriously? It's brilliant and a little bit crazy and I love it, haha, talk about a bold move.


Edit: The funniest scenario involves, as mentioned, Intel and AMD licensing nVidia IP for easy integration into APUs or SoCs. Or AMD licensing CUDA - that might actually happen, who knows. UE4 will power a ton of games in the next generation and it has PhysX support, so being able to compute a light CUDA workload would give AMD a competitive advantage, or give Intel another selling point for their IGPUs, if it becomes a thing.

This is crazy, but at the same time it also makes total sense. We desktop users are dinosaurs in the market, integration is everything now. This is so weird, though, it totally changes who they were competing with as of a few days ago. I mean, they had Tegra, but now... Wow.

Edit 2: It's license-available as soon as it's on paper! Funny situation number three (this one is not gonna happen, but bear with me): AMD buys a license for nVidia's next gen chip the very moment it gets taped out, and produces the first nVidia designed card before nVidia gets it out to partners for production. An actual thing that can happen now. Not going to happen for a lot of obvious reasons, but it could. Haha, what in the world. Strange days.

Agreed fucked around with this message at 04:04 on Jun 19, 2013

metasynthetic
Dec 2, 2005

in one moment, Earth

in the next, Heaven

Megamarm

Factory Factory posted:

That... hm. That looks like just the state of things, considering no SLI mode surround is offered on the Nvidia 3D Surround webpage. That kinda blows.

gently caress me. I sincerely hope this is a temporary issue, otherwise I'll return these and just get a 780.

Edit: how did this review get this to work?

http://www.legitreviews.com/article/2210/1/

I've tried a few variations based on the color coded diagrams they list. I don't have the cables available to try a pure DVI based setup, maybe that's it? I've got 2 pure DVI cables, 1 HDMI, and 1 DVI-to-HDMI. Ugh.

metasynthetic fucked around with this message at 06:22 on Jun 19, 2013


Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Agreed posted:

nVidia is gonna get so much richer because of this. They're licensing... seriously? It's brilliant and a little bit crazy and I love it, haha, talk about a bold move.
Realistically, there are only two potential licensees. Neither one nets a ton of money.

Apple's an interesting possibility and would net a fair amount of money (it would probably kill Imagination in the process). Apple builds their own CPUs, has no problem building relatively enormous mobile chips, and doesn't care about margins nearly as much as other vendors, so it could take a next-gen Tegra part relatively early and put a low-clocked, enormous die in a tablet. A5X in the iPad 3 was 2x the size of Tegra 3 because Apple can make the SoC cost back on the tablet (there's no separate SoC vendor that has to make money plus a final OEM that also has to make money). However, this isn't a ton of money, as Imagination's total revenue was ~$200M last year, and that's with owning every iPhone and iPad sold (along with a bunch of other processors).

Samsung is more difficult to understand because it doesn't build its own CPUs--it takes stock ARM cores. If Samsung continues to do that, why wouldn't they just use Tegra? Licensing a GPU core wouldn't net them anything meaningful over just buying Tegra, as far as I can see.
