|
Rastor posted:Doesn't seem hard, just have a power connector on the motherboard and then some fat traces to the PCIe connector. Yeah, but they do it through 10G cables, not through a PCIe connector with pins half the size of a toothpick.
|
# ? Aug 22, 2016 03:13 |
|
|
wargames posted:so no more 6 pins or 8 pins yay! Yeah, all I read from that is "300W from the slot now? Okay, let's make a 450W card."
|
# ? Aug 22, 2016 03:15 |
|
BIG HEADLINE posted:Yeah, all I read from that is "300W from the slot now? Okay, let's make a 450W card." I for one welcome our new 3.5 slot cooler overlords.
|
# ? Aug 22, 2016 03:35 |
|
I ran into a major issue upgrading to the Windows 10 Anniversary Update and the latest Nvidia drivers for my 1080, where during the driver installation the screen just glitched out to random artifacts. Rebooting didn't work, so I had to restart in safe mode and roll back to the pre-Anniversary build and the previous set of drivers. DDU didn't help, and Google doesn't seem to show that this is a common issue. Kinda scratching my head on this one.
|
# ? Aug 22, 2016 04:04 |
|
Does it also mean that PCIe 4.0 won't be backwards compatible with PCIe 3.0 cards and vice versa?
|
# ? Aug 22, 2016 04:24 |
|
BIG HEADLINE posted:Yeah, all I read from that is "300W from the slot now? Okay, let's make a 450W card." Just in time for a node where I'm not actually sure they can make an affordable consumer card make meaningful use of 300W.
|
# ? Aug 22, 2016 04:27 |
|
xthetenth posted:Just in time for a node where I'm not actually sure they can make an affordable consumer card make meaningful use of 300W. O ye of little faith in AMD. You think they put all that effort into the Fury X's liquid cooler not to use it again?
|
# ? Aug 22, 2016 04:30 |
|
Twerk from Home posted:O ye of little faith in AMD. You think they put all that effort into the Fury X's liquid cooler not to use it again? I loving hope they're going to use it again.
|
# ? Aug 22, 2016 04:31 |
|
I can't imagine what kind of monstrosity you'd need to come up with to make a 450W card off a 14nm GPU. You'd basically need two Titan XPs on one chip, as that card is only 250W on its own.
|
# ? Aug 22, 2016 04:31 |
|
Beautiful Ninja posted:I can't imagine what kind of monstrosity you'd need to come up with to make a 450W card off a 14nm GPU. You'd basically need two Titan XPs on one chip, as that card is only 250W on its own. Titan Black X2. $3000, 7,168 CUDA cores. Go Quad-SLI for your triple 4K setups.
|
# ? Aug 22, 2016 04:34 |
|
Is there a passively-cooled (and ideally slot powered) Nvidia GPU on the market that's worthwhile as a PhysX card or is that driver option purely theoretical? The new Deus Ex has cloth physics and maybe it could boost Hairworks?
|
# ? Aug 22, 2016 05:34 |
|
Shumagorath posted:Is there a passively-cooled (and ideally slot powered) Nvidia GPU on the market that's worthwhile as a PhysX card or is that driver option purely theoretical? The new Deus Ex has cloth physics and maybe it could boost Hairworks? Whole lot of nope there. My understanding is that if you use a PhysX GPU that's much slower than your main GPU, it just ends up bottlenecking the whole thing anyway. Anything passively cooled would introduce a huge bottleneck in your system; the fastest option seems to be a GT 730, which is basically a card that exists for people who don't have functional integrated graphics.
|
# ? Aug 22, 2016 05:38 |
|
SwissArmyDruid posted:PCIe 4.0 (v.7 of the spec) to have 16 GT/s and 300W from the slot. And custom PSU cable makers just let out a collective scream.
|
# ? Aug 22, 2016 05:59 |
|
Can someone explain what's going on with overclocking 1070 and 1080 cards in terms of Nvidia somehow limiting voltage OC? I OC'd my Gigabyte G1 1070 but the performance gains were pretty marginal. There doesn't really seem to be a point to OC the card at the moment. I looked into it a little bit, and apparently Nvidia limits how much you can overvolt the card which ends up limiting how much you can bump up the clock speed. Even with the card being maxed out before stability became an issue, I was getting around 65-73C temps which was still very much in the safe zone. Any reason why this is happening? I'm guessing it has something to do with stopping people from just buying 1070 cards and OCing them to 1080 specs so no one feels a need to pay a grand to buy a loving 1080. Is there any way around Nvidia's hard limit on overvolting the card?
|
# ? Aug 22, 2016 06:14 |
Avalanche posted:Can someone explain what's going on with overclocking 1070 and 1080 cards in terms of Nvidia somehow limiting voltage OC? No one knows why Nvidia limited it like that, though I really doubt it's to keep people from OCing the 1070 up to 1080 level; I very much doubt that's anywhere close to possible. Besides, if you could, there'd be no reason you couldn't OC the 1080 that much further, since the fundamental difference in CUDA cores stays the same. And no, there's no way around it right now, since the BIOS/firmware is signed.
|
|
# ? Aug 22, 2016 06:25 |
|
Nvidia has always limited the voltage; everybody does. It's just that this time they're shipping closer to that limit than before. A part of me thinks it's just because they can (far fewer wattage concerns on 16nm). The short of it is that all cards seem to consistently hit around the same limits. Consider that this is a good thing with some immediately annoying consequences: we've built expectations around overclocking since it shifted from a no-no to a marketable feature, but we must not forget that ideally we wouldn't have to overclock at all to get the full use of the chip. A boring future for sure, and one that AIBs won't be super happy about, but we shouldn't really be upset about the direction we're heading. At the end of the day it's better for a product to basically "OC" itself. Now to the darker side of things: most voltage limits are set to prevent damage. To go beyond those limits is what overclocking truly is, and just like before, no company is going to support that officially. Once someone breaks the hard lock on voltage control we'll see what it can do, but that's obviously uncharted territory, and not one I'd jump into until you see others blow up their cards. And to the darkest part of it: the limit in this case may be set intentionally low for an 11-series refresh. But that's speculation at best. Once someone figures out how to circumvent the hard limit, we'll know.
|
# ? Aug 22, 2016 06:58 |
|
Smarf posted:What driver are you using on the monitor? At this stage I'm guessing it's more of a Crimson driver issue than anything. I'm using the beta driver on the monitor (the one that adds 'freesync' after the name). And yes, it is a driver issue more than a thing with the monitor.
|
# ? Aug 22, 2016 07:39 |
|
Billa posted:I'm using the beta driver on the monitor (the one that adds 'freesync' after the name). And yes, it is a driver issue more than a thing with the monitor. Could you link me to that specific driver? The one I'm using from their website shows my monitor with a G instead of the PF in the model name, which seems really odd.
|
# ? Aug 22, 2016 07:50 |
|
BIG HEADLINE posted:Yeah, all I read from that is "300W from the slot now? Okay, let's make a 450W card." Real talk though, you could power a Titan XP off that, and power consumption is only ever going to go down. Hopefully GloFo doesn't get stuck on 14nm like they did with 28nm. (AHAHAHAHAHA yes they are, just watch them, they said they were going to skip 10nm entirely and go straight to 7nm. ) Moore's Law (or some similar analogue/derivative thereof) at work, and all that. I don't think Titans have ever used more than a single 8-pin and a 6-pin, so. ColHannibal posted:And custom PSU cable makers just let out a collective scream. Well, at least they can't ever take the ATX and EPS cables away from guys like Cablemod, so I think they'll be okay. SwissArmyDruid fucked around with this message at 07:58 on Aug 22, 2016 |
# ? Aug 22, 2016 07:52 |
|
How can it be backwards compatible, though? I'm guessing all cards will still come with PCIe power connectors, and check in the early stages of boot if they're hooked up to PCIe 4, then go nuts and pull all the power through the slot (which sounds dodgy to me, those little fingers in the slot were never enough before, and now they're suddenly fine for ludicrous power? Seems odd).
|
# ? Aug 22, 2016 07:57 |
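The skepticism about those little slot fingers can be put in rough numbers. A back-of-envelope sketch, assuming an x16 edge connector has five +12V power contacts and that about 66W of today's 75W slot budget rides on +12V (both figures are assumptions about the connector, not something from the article):

```python
# Rough current per +12V slot contact, today vs. a hypothetical
# 300 W all-through-the-slot design. The pin count and rail split
# below are assumptions, not from the PCIe spec text.
PINS_12V = 5  # assumed +12V power contacts on an x16 edge connector

def amps_per_pin(watts, volts=12.0, pins=PINS_12V):
    """Total +12V current divided evenly across the slot's power contacts."""
    return watts / volts / pins

today = amps_per_pin(66)      # ~66 W of the current 75 W budget on +12V
proposed = amps_per_pin(300)  # everything pulled through the slot
print(f"today: {today:.1f} A/pin, proposed: {proposed:.1f} A/pin")
# prints "today: 1.1 A/pin, proposed: 5.0 A/pin"
```

Under those assumptions each contact would carry several times its current load, which is presumably why beefier contacts and lower-resistance materials come up as the answer.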
|
Smarf posted:Could you like me to that specific driver? The one I'm using from their website shows my monitor with a G instead of the PF in the model name, seems really odd. https://www.youtube.com/watch?v=9GrYpQeTNks (the driver is linked in the video's show-more info)
|
# ? Aug 22, 2016 08:16 |
|
Billa posted:https://www.youtube.com/watch?v=9GrYpQeTNks Perfect, thank you. I'm going to assume our monitors are fine and it'll be fixed with a driver update in the future.
|
# ? Aug 22, 2016 08:25 |
|
BIG HEADLINE posted:Yeah, all I read from that is "300W from the slot now? Okay, let's make a 450W card." You can tell nobody read the article, because they actually say the real number might end up being 500W or higher.
|
# ? Aug 22, 2016 08:39 |
HalloKitty posted:How can it be backwards compatible, though? I'm guessing all cards will still come with PCIe power connectors, and check in the early stages of boot if they're hooked up to PCIe 4, then go nuts and pull all the power through the slot (which sounds dodgy to me, those little fingers in the slot were never enough before, and now they're suddenly fine for ludicrous power? Seems odd). They can change the contacts, make them thicker, use lower resistance materials, etc.
|
|
# ? Aug 22, 2016 08:47 |
|
HalloKitty posted:How can it be backwards compatible, though? I'm guessing all cards will still come with PCIe power connectors, and check in the early stages of boot if they're hooked up to PCIe 4, then go nuts and pull all the power through the slot (which sounds dodgy to me, those little fingers in the slot were never enough before, and now they're suddenly fine for ludicrous power? Seems odd). One of two ways: * A longer PCIe slot, and stick the extra power pins at the end. Mechanically, at present, there's nothing stopping you from plugging an x1 or x4 card into a full-length slot. Similarly, any currently-existing PCIe cards would not be long enough to reach those extra power pins. * They do some negotiation between the card and the PCIe controller before turning on the full juice. Kind of like a smartphone using Quick Charge, but for PCIe. The actual x1-x16 connectors still are not (and have never been) a hotplug interface, after all. I LIKE TO SMOKE WEE posted:You can tell nobody read the article because they actually say the real number might end up being 500 or higher No, you know that nobody read the article because nobody mentioned the FREAKING STANDARDIZED EXTERNAL PCIE CABLE. Which I'm not exactly happy about because it just takes all the work that Intel and AMD and Razer put into external docks and throws it all back up into the air with uncertainty all over again. SwissArmyDruid fucked around with this message at 09:08 on Aug 22, 2016 |
# ? Aug 22, 2016 08:59 |
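The second option above, negotiating before turning on the full juice, could look something like this toy sketch. Every name, tier, and value here is hypothetical, not from the actual PCIe 4.0 draft:

```python
# Toy model of staged slot-power negotiation (all values hypothetical):
# a card only ever gets the default 75 W unless the host explicitly
# grants a higher tier that both sides support.
DEFAULT_SLOT_W = 75
TIERS = [75, 150, 300]  # hypothetical negotiable power tiers

def negotiate(card_request_w, host_max_w):
    """Return the highest tier both card and host support, else the default."""
    granted = DEFAULT_SLOT_W
    for tier in TIERS:
        if tier <= card_request_w and tier <= host_max_w:
            granted = tier
    return granted

print(negotiate(300, 300))  # PCIe 4.0 host grants full slot power: 300
print(negotiate(300, 75))   # legacy host: card falls back to 75
```

In a scheme like this a PCIe 3.0 board would simply never grant the higher tiers, so a 4.0 card that still carries 6/8-pin connectors as a fallback stays backwards compatible.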
|
SwissArmyDruid posted:One of two ways: As someone who works at a company that designs PCBAs: when you get plastic connectors that big, the failure rate goes through the roof.
|
# ? Aug 22, 2016 09:04 |
|
Does this mean in the future we'll be buying motherboards based on their PCIe VRM quality and number of power phases in order to achieve good GPU overclocks?
|
# ? Aug 22, 2016 09:20 |
|
You guys are crazy. The 400+ watt limit for PCIe will never make it down to consumer motherboards. Adding that much extra power circuitry to every Walmart-special PC just on the off chance someone will buy a top-end video card for it is a complete non-starter. The target market is servers, which need to run multiple high-end compute cards* inside a rack case, where space and cooling constraints make additional per-card cabling very problematic. Consumer PCs will get 4.0 signalling someday, but it'll be a variant of the spec with lower power requirements for the slot. * this also applies to desktop virtualization setups that use the same cards in server designs. That external connector also seems destined for problems. It looks like they've stuck the optical transceiver inside the plug on the cable, which will make the cable very expensive and very fragile. Not being durable enough for daily cycling means it's dead in the consumer space, and it faces very stiff competition from other established interfaces in the server space.
|
# ? Aug 22, 2016 09:25 |
|
I really hope they don't effectively obsolete PCIe 3.0 for anything stronger than GTX *50 / RX *60 level cards. I'm looking to keep my current CPU+mobo combo for long enough that I might actually run afoul of the new standard if they stop making cards with 6-pins, and while I'm a noted low-resolution idiot, I'd at least like the option to move up once I have the inclination and finances. HalloKitty posted:How can it be backwards compatible, though? I'm guessing all cards will still come with PCIe power connectors, and check in the early stages of boot if they're hooked up to PCIe 4, then go nuts and pull all the power through the slot HMS Boromir fucked around with this message at 09:45 on Aug 22, 2016 |
# ? Aug 22, 2016 09:41 |
|
Hey cool, up 50 cents on the day after the Zen demo. Things are looking u- GlobalFoundries Will Allegedly Skip 10nm and Jump to Developing 7nm Process Technology In House -p. RIP AMD. Got their poo poo sorted out, but was then strangled to death by the albatross of GloFo dragging them back to the bad old days of 20nm.
|
# ? Aug 22, 2016 11:23 |
|
isn't that precisely the reason why amd contracted samsung directly for fabrication recently
|
# ? Aug 22, 2016 11:38 |
|
Doesn't change the fact that AMD has their balls stapled to GloFo for a minimum number of wafers/year. Look at how well that arrangement is working out now: AMD can't get good yields on 480s, GloFo are continually being blamed by the talking heads and people like us. It hurts AMD trying to compete, and it hurts GloFo when trying to get new sales. SwissArmyDruid fucked around with this message at 11:57 on Aug 22, 2016 |
# ? Aug 22, 2016 11:41 |
|
SwissArmyDruid posted:Doesn't change the fact that AMD has their balls stapled to GloFo for a minimum number of wafers/year. Look at how well that arrangement is working out now: AMD can't get good yields on 480s, GloFo are continually being blamed by the talking heads and people like us. I just see it as GloFo not having the resources to do both 10nm and 7nm. At least by focusing only on 7nm they can get a bit of a head start. Long term if they are going to be successful then they are going to have to live on the bleeding edge of tech at least for a little while, even if they sacrifice a couple years of 10nm to do it.
|
# ? Aug 22, 2016 12:53 |
|
Ninkobei posted:I just see it as GloFo not having the resources to do both 10nm and 7nm. At least by focusing only on 7nm they can get a bit of a head start. Long term if they are going to be successful then they are going to have to live on the bleeding edge of tech at least for a little while, even if they sacrifice a couple years of 10nm to do it. Yeah, but skipping nodes hasn't worked out well for AMD in the past. They skipped 20nm to wait for 14nm, and going from 28nm to 14nm took 4 years. When Nvidia waited for 16nm (4 years), they at least released well-performing 28nm cards in the meantime. Skipping 10nm to go to 7nm means they could be waiting another 2-3 years before they have a working GPU on the new process. Also, Nvidia maximized their 28nm process by basically increasing die size each generation, while AMD took ~2 years to go from a max die size of 352mm^2 on the HD 7970 to 438mm^2 on the R9 290X. SlayVus fucked around with this message at 14:24 on Aug 22, 2016 |
# ? Aug 22, 2016 14:18 |
|
EdEddnEddy posted:That's close to the setup I used to Play Crysis the first time. Tri SLI 8800GTX's on a Overclocked E6600. It was playable at ultra until you got to the final boss and it sort of memory leaked all over its face. I almost want to give Crysis a whirl with my overclocked i7-6700k + GTX 1070 set-up to see what happens, but I don't want to financially reward them for releasing a horribly unoptimized game.
|
# ? Aug 22, 2016 15:05 |
|
Buy it on Humble Bundle, set 100% to charity
|
# ? Aug 22, 2016 15:12 |
Anime Schoolgirl posted:isn't that precisely the reason why amd contracted samsung directly for fabrication recently Huh? Are you sure you aren't thinking of Nvidia's Pascal shrink?
|
|
# ? Aug 22, 2016 15:12 |
|
EdEddnEddy posted:That's close to the setup I used to Play Crysis the first time. Tri SLI 8800GTX's on a Overclocked E6600. It was playable at ultra until you got to the final boss and it sort of memory leaked all over its face. AMD still slaps around nVidia in Crysis, that's one of a handful of games where the 290/290X really spank the 970. I'd assume the relationship still exists between the RX 480 and the 1060 based on the 480's higher memory bandwidth, but review sites finally stopped testing Crysis: Warhead about 2 years ago.
|
# ? Aug 22, 2016 15:13 |
|
SlayVus posted:Also, Nvidia maximized their 28nm process by basically increasing die size for each generation. AMD took ~2 years to go from a max die size of 352mm^2 on HD 7970 to 438mm^2 on the R9 290X Fury X came out in 2015 and was 596mm^2. And there is a gap of 3.5 years from the release of the 7970 to the Fury X.
|
# ? Aug 22, 2016 17:30 |
|
|
Twerk from Home posted:AMD still slaps around nVidia in Crysis, that's one of handful of games where the 290/290X really spank the 970. I'd assume the relationship still exists between the RX 480 and the 1060 based on the 480's higher memory bandwidth, but review sites finally stopped testing Crysis: Warhead about 2 years ago. It's also a drat pretty game even today. Also, the Mechwarrior Living Legends mod is still fantastic looking, and more fun than MWO, so... I need to go back and play those games again. It's been a while.
|
# ? Aug 22, 2016 18:10 |