Gwaihir
Dec 8, 2009
Hair Elf

Seamonster posted:

So can small polaris fit into laptops or what?

I mean, some OEMs shove desktop GTX 980s into laptops. So sure!!!

penus penus penus
Nov 9, 2014

by piss__donald
I wish I could just rent a VR headset or something

SlayVus
Jul 10, 2009
Grimey Drawer

THE DOG HOUSE posted:

I wish I could just rent a VR headset or something

The face rest leaves something to be desired in the way of hygiene. None of them are pleather or leather.

HMS Boromir
Jul 16, 2011

by Lowtax
I wonder if VR Cafés will pop up.

Verizian
Dec 18, 2004
The spiky one.

SlayVus posted:

The face rest leaves something to be desired in the way of hygiene. None of them are pleather or leather.

I saw somewhere that the Vive will ship with two different face rests and you'll be able to order replacements at a "disposable price point". That would allow for VR cafes, rentals, and Derren Brown's Ghost Train VR ride at Thorpe Park. https://www.thorpepark.com/rides/derren-browns-ghost-train/

NewFatMike
Jun 11, 2015

HMS Boromir posted:

I wonder if VR Cafés will pop up.

Like a place where people might be able to purchase time at a video game machine... An... Arcade of sorts.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

NewFatMike posted:

Like a place where people might be able to purchase time at a video game machine... An... Arcade of sorts.

No such thing exists anymore. There's a reason that in most of the Dave and Busters locations now, they put the 'buy your game card' thing *well* away from what constitutes 'games' nowadays.

"Come on down...and play large-screen versions of the games you already have on your phone!"

jisforjosh
Jun 6, 2006

"It's J is for...you know what? Fuck it, jizz it is"
So Polaris is going to be running GDDR5 and not HBM2, correct? Is there really going to be an appreciable difference in performance between the two? Memory architectures have never been a thing I pay attention to.

penus penus penus
Nov 9, 2014

by piss__donald

SlayVus posted:

The face rest leaves something to be desired in the way of hygiene. None of them are pleather or leather.

Yeah I'd be disinfecting the poo poo out of it before use

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

jisforjosh posted:

So Polaris is going to be running GDDR5 and not HBM2, correct? Is there really going to be an appreciable difference in performance between the two? Memory architectures have never been a thing I pay attention to.

You just need enough memory bandwidth to not hold back the GPU. From the leaked specs for Pascal, they were able to get GDDR5 up to 512GB/sec in the X80 Ti, which I'm assuming will be fine since that's a pretty big jump from the 336GB/sec that the Titan X/980 Ti had. HBM2 is able to do 1TB/sec, but that's probably more bandwidth than even the X80 Titan will require for optimum performance; good to have, though.

EDIT: The leaked specs for Polaris claim that they're using GDDR5X, in which case I wouldn't be too worried about memory performance.
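
Rough sketch in Python of where those GB/s numbers come from, if anyone wants to sanity-check: peak bandwidth is just bus width times per-pin data rate. The card figures are the commonly cited ones; the GDDR5X line is a hypothetical config, not a confirmed spec.

[code]
# Peak theoretical memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps)
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(384, 7.0))    # 980 Ti / Titan X: 384-bit, 7 Gbps GDDR5 -> 336 GB/s
print(mem_bandwidth_gbs(4096, 1.0))   # Fury X: 4096-bit HBM1 at 1 Gbps/pin     -> 512 GB/s
print(mem_bandwidth_gbs(4096, 2.0))   # HBM2 at 2 Gbps/pin                      -> 1024 GB/s (~1 TB/s)
print(mem_bandwidth_gbs(384, 10.7))   # hypothetical GDDR5X config              -> ~513 GB/s
[/code]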

MaxxBot fucked around with this message at 22:24 on Mar 24, 2016

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

MaxxBot posted:

You just need enough memory bandwidth to not hold back the GPU. From the leaked specs for Pascal, they were able to get GDDR5 up to 512GB/sec in the X80 Ti, which I'm assuming will be fine since that's a pretty big jump from the 336GB/sec that the Titan X/980 Ti had. HBM2 is able to do 1TB/sec, but that's probably more bandwidth than even the X80 Titan will require for optimum performance; good to have, though.

EDIT: The leaked specs for Polaris claim that they're using GDDR5X, in which case I wouldn't be too worried about memory performance.

If they're using GDDR5X, I'd be much more worried about the timetable than the performance.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Continuing on the rolling trend of Polaris leaks

"Cutdown" Polaris 10 is a 2304 shader core, 256-bit bus, 8GB card

It's apparently faster than a 290X right now, despite the fewer shader cores, lower bandwidth, and lower clocks (about a 40% increase in shader core performance?).

Spitballing here, but this is either their 470 or 480 card to hit their ~$200 price target (as this would be entry-level VR). I'm going to bet the future 470X and 480X will use GDDR5X to double the bandwidth.

Still, there is a pretty massive gulf between Polaris 10 and Polaris 11 capability (and it looks like Polaris 11 is a 4GB, 128-bit bus card based on this, probably with 1024 shader cores), so maybe yields on Polaris 10 are such that even a half-cutdown Polaris 10 will get pushed out as a 470 (so the lineup is Polaris 11 12CU, Polaris 11 16CU, Polaris 10 22CU, Polaris 10 32CU, Polaris 10 36CU and Polaris 10 44CU, maybe?).
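
For anyone wondering where that ~40% number comes from, here's a back-of-envelope in Python. The 290X figures are the stock ones (2816 shaders, ~1000 MHz); the Polaris 10 sample clock is an assumed value for illustration, not something from the leak.

[code]
# shader count * clock (MHz), in arbitrary throughput units
r9_290x = 2816 * 1000
polaris10_cut = 2304 * 900   # assumed ~900 MHz engineering-sample clock

uplift_needed = r9_290x / polaris10_cut - 1
print(f"Per-shader uplift needed just to match a 290X: {uplift_needed:.0%}")  # ~36%
# Anything on top of that is the "faster than a 290X" part.
[/code]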

EDIT:

EmpyreanFlux fucked around with this message at 23:06 on Mar 24, 2016

Bleh Maestro
Aug 30, 2003

FaustianQ posted:

Continuing on the rolling trend of Polaris leaks

"Cutdown" Polaris 10 is a 2304 shader core, 256-bit bus, 8GB card

It's apparently faster than a 290X right now, despite the fewer shader cores, lower bandwidth, and lower clocks (about a 40% increase in shader core performance?).

Spitballing here, but this is either their 470 or 480 card to hit their ~$200 price target (as this would be entry-level VR). I'm going to bet the future 470X and 480X will use GDDR5X to double the bandwidth.

Still, there is a pretty massive gulf between Polaris 10 and Polaris 11 capability (and it looks like Polaris 11 is a 4GB, 128-bit bus card based on this, probably with 1024 shader cores), so maybe yields on Polaris 10 are such that even a half-cutdown Polaris 10 will get pushed out as a 470 (so the lineup is Polaris 11 12CU, Polaris 11 16CU, Polaris 10 22CU, Polaris 10 32CU, Polaris 10 36CU and Polaris 10 44CU, maybe?).

EDIT:


How come clocks are so low?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Bleh Maestro posted:

How come clocks are so low?

My best guess is that it's a validation sample, but the worst possibility is some limitation in GCN's design. We'll see by June, but I'm betting normal clocks for Polaris will be 1050-1100; I'm making no bets on its ability to overclock, though 1300 would be safe.

SlayVus
Jul 10, 2009
Grimey Drawer

MaxxBot posted:

You just need enough memory bandwidth to not hold back the GPU. From the leaked specs for Pascal, they were able to get GDDR5 up to 512GB/sec in the X80 Ti, which I'm assuming will be fine since that's a pretty big jump from the 336GB/sec that the Titan X/980 Ti had. HBM2 is able to do 1TB/sec, but that's probably more bandwidth than even the X80 Titan will require for optimum performance; good to have, though.

EDIT: The leaked specs for Polaris claim that they're using GDDR5X, in which case I wouldn't be too worried about memory performance.

I thought memory bandwidth wasn't a problem for today's graphics. The major problem, at least for Nvidia, is render performance.

japtor
Oct 28, 2005

FaustianQ posted:

My best guess is that it's a validation sample, but the worst possibility is some limitation in GCN's design. We'll see by June, but I'm betting normal clocks for Polaris will be 1050-1100; I'm making no bets on its ability to overclock, though 1300 would be safe.
Could it be used for mobile at the lower clocks or would the part still be too hot and big for that use (vs Polaris 11 or whatever)?

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

japtor posted:

Could it be used for mobile at the lower clocks or would the part still be too hot and big for that use (vs Polaris 11 or whatever)?

They've stuffed desktop 980s into laptops. Any Polaris will do fine.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

SlayVus posted:

I thought memory bandwidth wasn't a problem for today's graphics. The major problem, at least for Nvidia, is render performance.

Yeah, that's why they don't necessarily have to switch over to HBM right this instant; GDDR is still adequate for now, but not for much longer, since it's being pushed close to its limits. HBM also has the advantage of lower power consumption, which leaves a larger TDP budget for the GPU.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

Yeah, that's why they don't necessarily have to switch over to HBM right this instant; GDDR is still adequate for now, but not for much longer, since it's being pushed close to its limits. HBM also has the advantage of lower power consumption, which leaves a larger TDP budget for the GPU.

It totally depends on the chip performance. With something 390X class, or even Fury/Fury X class, there's no need for HBM. The 980 Ti proved that.

Also, the TDP difference is really negligible. Yeah, HBM eats like 50% less power than GDDR5, but you're talking about dropping from a memory power budget of 31.5W on the 980 Ti to 15W on the Fury X. A delta of ~17W is noise that gets overwhelmed by aftermarket clockrates on 28nm, let alone the difference you get from switching to 16nm. Maybe a 22W total delta if you didn't increase bandwidth at all (Fury X peak bandwidth is ~50% higher than the 980 Ti's, and power consumption scales with bandwidth consumed).
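
Spelling out where the 17W and 22W figures come from, as a quick Python sketch (using the post's own numbers and its assumption that memory power scales roughly linearly with bandwidth):

[code]
gddr5_power_980ti = 31.5   # W, at ~336 GB/s
hbm_power_furyx = 15.0     # W, at ~512 GB/s

# Straight delta, as the cards were actually built:
print(gddr5_power_980ti - hbm_power_furyx)         # ~16.5 W, the "delta of ~17W"

# Iso-bandwidth delta: scale the HBM figure down to the 980 Ti's 336 GB/s
hbm_power_at_336 = hbm_power_furyx * 336 / 512
print(gddr5_power_980ti - hbm_power_at_336)        # ~21.7 W, the "maybe 22W" figure
[/code]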

Paul MaudDib fucked around with this message at 02:35 on Mar 25, 2016

SwissArmyDruid
Feb 14, 2014

by sebmojo
They don't necessarily have to switch over to HBM right now, because 5X hit the scene, which lets AMD/Nvidia basically drop 5X in where 5 would previously have been used, with a minimal amount of tweaking.

That said, HBM still has material benefits over GDDR of any flavor in the form of *latency*, which I expect may become much more relevant as time goes on.

EoRaptor
Sep 13, 2003

by Fluffdaddy

SwissArmyDruid posted:

They don't necessarily have to switch over to HBM right now, because 5X hit the scene, which lets AMD/Nvidia basically drop 5X in where 5 would previously have been used, with a minimal amount of tweaking.

That said, HBM still has material benefits over GDDR of any flavor in the form of *latency*, which I expect may become much more relevant as time goes on.

The big switch to HBM will be driven by how expensive it is to produce PCBs and stick things to them.

HBM enormously simplifies the PCB design by removing the single largest routing requirement, and probably knocks 2 or 4 layers right off. Removing the pick-and-place requirements is also a cost savings, and being able to test the chip and memory before they leave the foundry should also reduce post-assembly failure rates.

The switchover will probably be very sudden, as the cost threshold is very one-way.

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

I'd kind of expect that switchover to percolate through form factors, as the smaller the device the more the other savings matter, and the smaller and cheaper the interposer can be.

kode54
Nov 26, 2007

aka kuroshi
Fun Shoe

SlayVus posted:

The face rest leaves something to be desired in the way of hygiene. None of them are pleather or leather.

I don't know about you, but my sweat eats holes in those materials.

EoRaptor
Sep 13, 2003

by Fluffdaddy

xthetenth posted:

I'd kind of expect that switchover to percolate through form factors, as the smaller the device the more the other savings matter, and the smaller and cheaper the interposer can be.

The price set by the GPU chipmaker will probably matter the most.

The scenario I predict is that, when HBM begins to show up in the mid tier, you'll end up with the following:

We will reach this type of price equation for manufacturers:

Mid tier:
Chip: $60
Board+parts: $50
Validation and packaging: $20
Shipping: $10
Overhead (RMA, failed boards, etc): $10

Low tier:
Chip: $30
Board+parts: $65
Validation and packaging: $25
Shipping: $10
Overhead (RMA, failed boards, etc): $15

A $5 difference in manufacturing will work out to about $20 at retail. There will be a bunch of profit-taking at first, but competition will squeeze that out. Suddenly, the manufacturer won't be selling as many low-tier cards versus mid-tier, because people will get a huge performance boost for a very small additional sum, and the price to manufacture the low tier will go up as volumes drop. Everybody will push HBM into the price gap left by the low tier disappearing, driving volumes higher and prices lower.

We won't see the same thing at the high tier, because the premium price will hold the gap in retail price between high and mid wide enough to maintain both markets. Once GPU chipmakers introduce HBM at the mid tier, expect it to dominate almost immediately, and the low tier to dry up very quickly.
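
Totaling that up (a quick Python sketch; the ~4x manufacturing-to-retail multiplier is an assumed figure that makes the $5-at-the-factory / $20-at-retail arithmetic work, not anything official):

[code]
mid_tier = {"chip": 60, "board+parts": 50, "validation+packaging": 20, "shipping": 10, "overhead": 10}
low_tier = {"chip": 30, "board+parts": 65, "validation+packaging": 25, "shipping": 10, "overhead": 15}

mid_cost, low_cost = sum(mid_tier.values()), sum(low_tier.values())
print(mid_cost, low_cost, mid_cost - low_cost)    # 150, 145, 5

retail_multiplier = 4                             # assumed channel markup
print((mid_cost - low_cost) * retail_multiplier)  # ~$20 gap at retail
[/code]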

sauer kraut
Oct 2, 2004

FaustianQ posted:

Continuing on the rolling trend of Polaris leaks

"Cutdown" Polaris 10 is a 2304 shader core, 256-bit bus, 8GB card

It's apparently faster than a 290X right now, despite the fewer shader cores, lower bandwidth, and lower clocks (about a 40% increase in shader core performance?).

That looks really good; if that's the new $250 value card, I'd be happy.
At least no one is talking about AMD pushing a crappy 4GB HBM1 card anymore, yuck.

SwissArmyDruid
Feb 14, 2014

by sebmojo

EoRaptor posted:

The big switch to HBM will be driven by how expensive it is to produce PCBs and stick things to them.

HBM enormously simplifies the PCB design by removing the single largest routing requirement, and probably knocks 2 or 4 layers right off. Removing the pick-and-place requirements is also a cost savings, and being able to test the chip and memory before they leave the foundry should also reduce post-assembly failure rates.

The switchover will probably be very sudden, as the cost threshold is very one-way.

No, the big switch to HBM will be driven by HBM2 availability, since that's not a drop-in replacement for HBM1, due to having a larger footprint. Also, the pick-and-place requirements are still there, but instead of the millimeter scale of accuracy needed for GDDR5/X chips, it's a micrometer scale to get those microbumps to line up on the interposer. =P

SwissArmyDruid fucked around with this message at 09:17 on Mar 25, 2016

BOOTY-ADE
Aug 30, 2006

BIG KOOL TELLIN' Y'ALL TO KEEP IT TIGHT

EoRaptor posted:

The price set by the GPU chipmaker will probably matter the most.

The scenario I predict is that, when HBM begins to show up in the mid tier, you'll end up with the following:

We will reach this type of price equation for manufacturers:

Mid tier:
Chip: $60
Board+parts: $50
Validation and packaging: $20
Shipping: $10
Overhead (RMA, failed boards, etc): $10

Low tier:
Chip: $30
Board+parts: $65
Validation and packaging: $25
Shipping: $10
Overhead (RMA, failed boards, etc): $15

A $5 difference in manufacturing will work out to about $20 at retail. There will be a bunch of profit-taking at first, but competition will squeeze that out. Suddenly, the manufacturer won't be selling as many low-tier cards versus mid-tier, because people will get a huge performance boost for a very small additional sum, and the price to manufacture the low tier will go up as volumes drop. Everybody will push HBM into the price gap left by the low tier disappearing, driving volumes higher and prices lower.

We won't see the same thing at the high tier, because the premium price will hold the gap in retail price between high and mid wide enough to maintain both markets. Once GPU chipmakers introduce HBM at the mid tier, expect it to dominate almost immediately, and the low tier to dry up very quickly.

I can definitely see that happening, and it sort of ties in with a previous post I made about low-end cards eventually switching to 4GB. I can't see card manufacturers using HBM1 and limiting it to anything less than the 4GB max, and with the higher bandwidth and potentially smaller cards, HBM1 could dominate the low-end market. It saves having to make so many cards with different types of memory or varying bus widths like the current market, which I can't imagine is cheap to do. Ideally I'd love to see low-end cards use 4GB HBM1, mid-range with 8-16GB HBM2, and high-end with 16GB+ and possibly niche cards like dual GPU. Hopefully with all the changes it'll mean the end of foot-long video cards and absurd air or liquid cooling being required for stability at stock clocks.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
How does GPU/hardware manufacturing work anyways? NVIDIA/AMD develops a card (a so-called reference card?) and then licenses the tech to people like MSI, who get schematics and, I presume, some kind of "chip" that's the heart of what makes a 9xx a 9xx, and then MSI orders the rest of the components and outsources to some factory to assemble it all? What exactly is the difference between an ASUS and an MSI card in this sense? Are they allowed to use different amounts of memory, and even decide to use cheaper older-gen memory (HBM vs DDR...?) as they wish?

e: Or is it literally just a difference in stock vs aftermarket cooler?

Boris Galerkin fucked around with this message at 16:06 on Mar 25, 2016

penus penus penus
Nov 9, 2014

by piss__donald

Boris Galerkin posted:

How does GPU/hardware manufacturing work anyways? NVIDIA/AMD develops a card (a so-called reference card?) and then licenses the tech to people like MSI, who get schematics and, I presume, some kind of "chip" that's the heart of what makes a 9xx a 9xx, and then MSI orders the rest of the components and outsources to some factory to assemble it all? What exactly is the difference between an ASUS and an MSI card in this sense? Are they allowed to use different amounts of memory, and even decide to use cheaper older-gen memory (HBM vs DDR...?) as they wish?

e: Or is it literally just a difference in stock vs aftermarket cooler?

You're basically correct. However, what I don't know is whether aftermarket companies that use a reference board built that board themselves based on the schematics. Typically, in the past, an aftermarket company could change the amount of RAM on the card, choose different components (namely power delivery), inputs and outputs, and so on, as long as they met the specifications they were supposed to. Nvidia, for example, has "Greenlight", which sets minimum standards for specs.

So they receive the chip from AMD or Nvidia and build the board themselves in a lot of cases. I wouldn't necessarily say MSI (for example) outsources to some factory, though; they are that factory. Of course, some brands do outsource, like EVGA.

What is interesting about HBM is that I don't know if they can change the amount of RAM anymore going forward.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Boris Galerkin posted:

How does GPU/hardware manufacturing work anyways? NVIDIA/AMD develops a card (a so-called reference card?) and then licenses the tech to people like MSI, who get schematics and, I presume, some kind of "chip" that's the heart of what makes a 9xx a 9xx, and then MSI orders the rest of the components and outsources to some factory to assemble it all? What exactly is the difference between an ASUS and an MSI card in this sense? Are they allowed to use different amounts of memory, and even decide to use cheaper older-gen memory (HBM vs DDR...?) as they wish?

e: Or is it literally just a difference in stock vs aftermarket cooler?

Some manufacturers just put an aftermarket cooler on the stock card design, while some buy just the GPU and design both a custom PCB and cooler. Using different memory types wouldn't work, but I'm not sure if different amounts have ever been done.

Durinia
Sep 26, 2014

The Mad Computer Scientist

THE DOG HOUSE posted:

What is interesting about HBM is that I don't know if they can change the amount of RAM anymore going forward.

For a given GPU design point, you'll be able to choose different stack heights (2,4,8-high) to put into the card. It's also probably quite possible to only enable some of the designed-in HBM sites. So, if you have some GPU with 4 HBM sites, you could populate all of them for 8/16/32 GB capacity at 1 TB/s, or you could perhaps just populate half of them for 4/8/16 GB capacity at 512GB/s. You can look at this much like their different SM counts - they'd bin bonding yields into different level parts.

This is locked in by the GPU vendor, as it's all inside the package the board builders get. I'm not 100% sure how it works with GDDR5 and the board people - their choices would be pretty limited (site count/bus width is fixed) if they have any choice at all. G5 doesn't have a lot of capacity variation per chip.
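
A quick enumeration of those combinations in Python, using the standard HBM2 per-stack figures (1 GB per die layer, ~256 GB/s per stack); the 4-site GPU with half or all sites populated is the hypothetical one from the post above, not any announced part.

[code]
GB_PER_LAYER = 1       # HBM2: 1 GB per die in the stack
GBS_PER_STACK = 256    # HBM2: ~256 GB/s per populated stack

for populated_sites in (2, 4):           # half-populated vs fully populated 4-site GPU
    for stack_height in (2, 4, 8):
        capacity = populated_sites * stack_height * GB_PER_LAYER
        bandwidth = populated_sites * GBS_PER_STACK
        print(f"{populated_sites} stacks x {stack_height}-high: {capacity} GB @ {bandwidth} GB/s")
[/code]

That reproduces the 4/8/16 GB @ 512 GB/s and 8/16/32 GB @ ~1 TB/s options described above.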

Ozz81 posted:

I can definitely see that happening, and it sort of ties in with a previous post I made about low-end cards eventually switching to 4GB. I can't see card manufacturers using HBM1 and limiting it to anything less than the 4GB max, and with the higher bandwidth and potentially smaller cards, HBM1 could dominate the low-end market. It saves having to make so many cards with different types of memory or varying bus widths like the current market, which I can't imagine is cheap to do. Ideally I'd love to see low-end cards use 4GB HBM1, mid-range with 8-16GB HBM2, and high-end with 16GB+ and possibly niche cards like dual GPU. Hopefully with all the changes it'll mean the end of foot-long video cards and absurd air or liquid cooling being required for stability at stock clocks.

Saying that HBM1 will have more success than HBM2 at the lower end is much like saying LPDDR3 is more suited to low-end phones. The new generation is defined almost exclusively to obviate the need for the old one from end-to-end. Once you have volume crossover, the price on the older stuff goes way up because the memory vendors want to stop building it. HBM1 was essentially a proof of concept. HBM2 will likely be the standard across the board.

Durinia fucked around with this message at 16:40 on Mar 25, 2016

penus penus penus
Nov 9, 2014

by piss__donald

Durinia posted:

For a given GPU design point, you'll be able to choose different stack heights (2,4,8-high) to put into the card. It's also probably quite possible to only enable some of the designed-in HBM sites. So, if you have some GPU with 4 HBM sites, you could populate all of them for 8/16/32 GB capacity at 1 TB/s, or you could perhaps just populate half of them for 4/8/16 GB capacity at 512GB/s. You can look at this much like their different SM counts - they'd bin bonding yields into different level parts.

This is locked in by the GPU vendor, as it's all inside the package the board builders get. I'm not 100% sure how it works with GDDR5 and the board people - their choices would be pretty limited (site count/bus width is fixed) if they have any choice at all. G5 doesn't have a lot of capacity variation per chip.

Yeah, there have been a few cards with twice the VRAM of the reference design in the past, but it's fairly uncommon. The 960 is probably the most notable one, but there was a 4GB 770, a 6GB 780, a 6GB 280X (...). I figured as much that HBM would be more or less locked in, but I wasn't sure.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

MaxxBot posted:

Some manufacturers just put an aftermarket cooler on the stock card design, while some buy just the GPU and design both a custom PCB and cooler. Using different memory types wouldn't work, but I'm not sure if different amounts have ever been done.

I'm pretty sure the number of chips is baked in by the number of lanes enabled on the memory bus. However, there have definitely been cases where the OEM used double-density chips to boost VRAM capacity. That's where you get 6 GB 780s and so on. Sometimes the GPU vendor will put their foot down over this though - nobody was allowed to make 6 GB 780 Tis, because that would have undercut sales of the GTX Titan.

The case I'm not entirely sure about is low-end GPUs - there are a lot of SKUs with different memory capacities, and I don't know whether they do that by messing with the density or whether they can just not populate chips on the board. At some point it might actually be cheaper to use fewer higher-density chips that are actually in volume production than to seek out super low-end parts that aren't commonly in production. You could tell if a vendor was doing this by a narrower memory bus and reduced bandwidth.

On the very bottom end they also have some models which use DDR3 instead, but again I think those are usually entirely separate chips. I had a DDR3 GT 640 and it actually had a different CUDA capability from the GDDR5 version.
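
Rough Python sketch of the "chip count is fixed by the bus, capacity comes from density" point. GDDR5 chips have 32-bit interfaces, so a 384-bit bus means 12 chips in a normal single-sided layout (clamshell configurations, which double the chip count per channel, are ignored here); the densities are just the common GDDR5 options.

[code]
def vram_gb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32           # one 32-bit GDDR5 chip per channel
    return chips * chip_density_gbit / 8   # Gbit -> GB

print(vram_gb(384, 2))   # GTX 780 reference: 12 x 2Gb chips         -> 3 GB
print(vram_gb(384, 4))   # "6 GB 780": same 12 chips, double density -> 6 GB
print(vram_gb(256, 4))   # e.g. a 4 GB card on a 256-bit bus
[/code]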

Paul MaudDib fucked around with this message at 18:29 on Mar 25, 2016

HMS Boromir
Jul 16, 2011

by Lowtax

THE DOG HOUSE posted:

Yeah, there have been a few cards with twice the VRAM of the reference design in the past, but it's fairly uncommon. The 960 is probably the most notable one, but there was a 4GB 770, a 6GB 780, a 6GB 280X (...). I figured as much that HBM would be more or less locked in, but I wasn't sure.

ASUS and Gigabyte both have a 4GB 750 Ti. I have to assume they were the result of drunken dares.

Automata 10 Pack
Jun 21, 2007

Ten games published by Automata, on one cassette
Do the leaks on the X80 Ti imply that the card is actually arriving very soon?

penus penus penus
Nov 9, 2014

by piss__donald

Mutant Standard posted:

Do the leaks on the X80 Ti imply that the card is actually arriving very soon?

I believe we'll know April 4th-7th during the Nvidia presentation, but most leaks seem to suggest something's coming out soon. Although based on the past, I wouldn't hold my breath for an X80 Ti to be among the first.

Truga
May 4, 2014
Lipstick Apathy
So I know, rumours and all, but might as well:

http://www.bitsandchips.it/52-english-news/6785-rumor-pascal-in-trouble-with-asyncronous-compute-code

This would be pretty good I think, cheaper GPUs for everyone! :v:

Anime Schoolgirl
Nov 28, 2002

Truga posted:

So I know, rumours and all, but might as well:

http://www.bitsandchips.it/52-english-news/6785-rumor-pascal-in-trouble-with-asyncronous-compute-code

This would be pretty good I think, cheaper GPUs for everyone! :v:
:yum:

Though I wouldn't bet on the hardware scheduler being hosed, considering it's supposed to be their Kepler compute successor line and all.

Anime Schoolgirl fucked around with this message at 20:13 on Mar 25, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Anime Schoolgirl posted:

:yum:

Though I wouldn't bet on the hardware scheduler being hosed, considering it's supposed to be their Kepler compute successor line and all.

Even if they put the scheduler back in, it could still have big problems with context switching. That wouldn't hurt compute, since compute happens entirely in compute mode, but you couldn't flip between graphics mode and compute mode instantly.

It's also possible that the scheduler isn't as flexible as GCN's is. Maxwell pretty much hardcoded a single command queue when operating in graphics mode, whereas GCN has 8 queues that are designed to work-steal to fight bubbles in the command queue. Having a scheduler onboard doesn't mean that you necessarily have well-developed command queues. This could potentially affect async shading even if async compute works in general.

It sounds a bit fishy to me as well, but even if it's a good compute performer there are some additional things that need to happen to make async compute from a graphical context performant. It's possible Pascal falls down on some of them.

Paul MaudDib fucked around with this message at 20:55 on Mar 25, 2016

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Mutant Standard posted:

Do the leaks on the X80 Ti imply that the card is actually arriving very soon?

I have a feeling that big-chip Pascal will happen sometime soonish, if you like your graphics cards costing thousands of dollars and without any outputs.
