|
Rastor posted:I see your GDDR6 and raise you HBM3: That sound you just heard was Samsung jumping into a pool of HPC people's drool.
|
# ? Aug 23, 2016 19:27 |
|
Haquer posted:https://www.reddit.com/r/Amd/comments/4y0a6j/rx_470_overclockundervolt_discussion/d6kcoqh?context=3 I thought this was pretty common, to be honest. It's pretty hard to tell exactly where the timings change without a BIOS editor to show you that information, though.
|
# ? Aug 23, 2016 19:31 |
|
Haquer posted:https://www.reddit.com/r/Amd/comments/4y0a6j/rx_470_overclockundervolt_discussion/d6kcoqh?context=3 Similar behavior is seen in the 1000 series too.
|
# ? Aug 23, 2016 19:51 |
|
Rastor posted:I see your GDDR6 and raise you HBM3: Wait, the article isn't clear: is there a distinction between low-cost, low-power HBM and HBM3? If so, Navi makes way more sense; just completely drop the GDDR5 controller and use HBM, HBM2, LCHBM, or HBM3 depending on your scaling.
|
# ? Aug 23, 2016 20:01 |
|
I love cats posted:Chips can also become bigger in size to accommodate more transistors. Chips can grow larger without giant increases in power consumption if clock speed is reduced. A chip with 5000 shaders running at 1GHz might consume less power than a chip with 2500 shaders running at 2GHz while delivering similar performance. Thermal management of the big, low-clocked chip is probably easier too, because the heat is dissipated over a larger die (even if power consumption were identical). But the manufacturer only gets half as many chips per wafer when building the 5000-shader chip, and more of them come out defective. Add it all up, and manufacturers prefer to sell hot-clocked 2500-shader chips.
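The tradeoff in that post can be sketched with the usual first-order dynamic-power model (P ~ C·V²·f, where higher clocks generally demand higher voltage) and a classic Poisson yield model. Every number here is an illustrative assumption, not measured data for any real GPU:

```python
import math

def dynamic_power(shaders, freq_ghz, volts):
    """P ~ C * V^2 * f, with switched capacitance proportional to shader count."""
    return shaders * volts**2 * freq_ghz

def poisson_yield(die_area_mm2, defects_per_mm2=0.001):
    """Classic Poisson yield model: Y = exp(-area * defect_density)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Higher clocks usually need higher voltage, so power grows superlinearly:
wide = dynamic_power(5000, 1.0, 0.90)   # 5000 shaders @ 1 GHz, lower voltage
fast = dynamic_power(2500, 2.0, 1.15)   # 2500 shaders @ 2 GHz, higher voltage
print(wide < fast)                      # True: the wide chip burns less here

# ...but the bigger die yields worse, on top of fewer candidates per wafer:
print(poisson_yield(600) < poisson_yield(300))  # True
```

With these made-up constants the wide chip wins on power and loses on yield, which is exactly the tension the post describes.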
|
# ? Aug 23, 2016 20:50 |
|
Mikojan posted:a bit off topic but: 7nm is already further than many people thought we would go. Obviously there are physical limits on how small transistors can scale, but I'd be very hesitant to put an exact death date on CMOS, because lots of smart people have done that already only to be proven wrong later.
|
# ? Aug 23, 2016 21:07 |
|
MaxxBot posted:7nm is already further than many people thought we would go. Obviously there are physical limits on how small transistors can scale, but I'd be very hesitant to put an exact death date on CMOS, because lots of smart people have done that already only to be proven wrong later. The other thing is that the nm measurements are mostly meaningless at this point; they are a marketing thing rather than an actual measure of circuit dimensions. The only thing 7nm means is "next process after 10nm".
|
# ? Aug 23, 2016 21:41 |
|
Is http://semiengineering.com/ a reliable/useful/relatively unbiased source for this sort of industry news? It's a suspiciously nice website, and the lack of ads is surprising.
|
# ? Aug 23, 2016 22:19 |
|
Rastor posted:The other thing is that the nm measurements are mostly meaningless at this point; they are a marketing thing rather than an actual measure of circuit dimensions. If that's the case, why didn't Nvidia insist on their current process being called 14nm instead of 16nm? I've already seen people assume the AMD stuff is better because it says 14nm on the box.
|
# ? Aug 23, 2016 22:19 |
They don't own the 16nm process; Nvidia is fabless.
|
|
# ? Aug 23, 2016 22:22 |
|
Rastor posted:The other thing is that the nm measurements are mostly meaningless at this point; they are a marketing thing rather than an actual measure of circuit dimensions. Would the theoretical 7nm be equal to, say, an Intel 10nm? Would Samsung's 10nm be equal to Intel's 14nm?
|
# ? Aug 23, 2016 22:33 |
No? I mean, the name basically comes from the smallest gate length that is possible. The various dimensions are trade secrets, except for what analysis firms have determined by cutting chips open, and there are many different dimensions: interconnect width and height, gate width, fin dimensions, pitches, and a bunch more.
|
|
# ? Aug 23, 2016 22:38 |
|
It's not 100% meaningless, but consider an example: Apple sourced the A9 from both the 16nm TSMC process and the 14nm Samsung process. 14 is 12.5% smaller than 16, so the 14nm part should have been noticeably better, right? But in reality the maximum variance was something like 3%, and some people thought the 14nm chip actually produced worse battery life than the 16nm chip. There are lots and lots of differences between the available processes that aren't captured by the nm number.
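The arithmetic in that post is easy to check: if the node names were literal feature sizes, 14 vs 16 would mean a 12.5% linear shrink, or roughly 23% by area, nowhere near the observed ~3% variance:

```python
# If "14nm" and "16nm" were literal feature sizes, naive scaling says:
linear_shrink = 1 - 14 / 16          # 12.5% smaller in one dimension
area_shrink = 1 - (14 / 16) ** 2     # ~23.4% smaller by area

print(f"{linear_shrink:.1%}")   # 12.5%
print(f"{area_shrink:.1%}")     # 23.4%
# The observed ~3% spread between the two A9 variants is nowhere near
# either figure, which is the point: the names aren't dimensions.
```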
|
# ? Aug 23, 2016 22:49 |
|
PerrineClostermann posted:Similar behavior is seen in the 1000 series too.
|
# ? Aug 23, 2016 23:04 |
|
Rastor posted:But in reality the maximum variance was like 3%. Some people thought the 14nm chip produced worse battery life than the 16nm chip. It does. Samsung/GloFo's process is total crap compared to TSMC's and Intel's processes.
|
# ? Aug 23, 2016 23:24 |
|
I don't know anything about HBM, but would it be hypothetically possible to move away from PCI boards when it gets widespread? Just have the chip drop into an additional socket on the motherboard, and throw your own cooler on it like a CPU. Of course some additional power would have to be directed through the motherboard with the necessary VRMs in place.
|
# ? Aug 24, 2016 00:26 |
A ~3% delta in battery life between the two in the majority of circumstances is hardly "total crap". Remember, FinFETs are still only like 13 years old, and there is still a lot of research into finding the optimal process steps. Hell, in terms of GPU yield, TSMC is likely total crap if the rumors are true, and Nvidia is rumored to be using Samsung for lower-end Pascal die shrinks.
|
|
# ? Aug 24, 2016 00:36 |
|
HalloKitty posted:I thought this was pretty common, to be honest. RAM timings needing to go looser as clocks increase has been a thing for... as long as OCing RAM has been a thing? Like back to the mega-voltage B5-chip original DDR days, even.
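The reason timings loosen with clock is that timings are counted in clock cycles while the silicon's access latency is roughly fixed in nanoseconds, so higher frequencies need more cycles to cover the same delay. A quick sketch (the numbers are illustrative assumptions, not the specs of any particular memory chip):

```python
# Memory timings are cycle counts, but the underlying access latency is
# roughly fixed in nanoseconds, so the count must grow with the clock.

def cas_cycles(latency_ns, data_rate_mtps):
    """Cycles needed to cover a fixed latency at a given DDR data rate."""
    clock_mhz = data_rate_mtps / 2      # DDR clocks at half the data rate
    cycle_ns = 1000 / clock_mhz
    return latency_ns / cycle_ns

# The same ~13.75 ns latency needs twice the cycles at twice the speed:
print(round(cas_cycles(13.75, 1600)))   # 11
print(round(cas_cycles(13.75, 3200)))   # 22
```

So a kit that runs CL11 at one speed needing CL22 at double the speed isn't "worse" timings in absolute terms; the wall-clock latency is the same.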
|
# ? Aug 24, 2016 02:09 |
|
Watermelon Daiquiri posted:A ~3% delta in battery life between the two in the majority of circumstances is hardly "total crap". Remember, FinFETs are still only like 13 years old, and there is still a lot of research into finding the optimal process steps. Hell, in terms of GPU yield, TSMC is likely total crap if the rumors are true, and Nvidia is rumored to be using Samsung for lower-end Pascal die shrinks. The rumors about TSMC yields being poor seem to be false. Last month's Steam survey had the 1070s and 1080s rising up the ranks real quick while the RX 480 was barely represented. The rumor now is that Polaris is actually the GPU that's suffering poor yields.
|
# ? Aug 24, 2016 02:39 |
|
Icept posted:I don't know anything about HBM, but would it be hypothetically possible to move away from PCI boards when it gets widespread? Just have the chip drop into an additional socket on the motherboard, and throw your own cooler on it like a CPU. Of course some additional power would have to be directed through the motherboard with the necessary VRMs in place. Yes, this kind of thing is called a mezzanine connector and nVidia is already pursuing it. http://www.pcgamer.com/nvidia-pascal-p100-architecture-deep-dive/
|
# ? Aug 24, 2016 03:17 |
|
Beautiful Ninja posted:The rumors about TSMC yields being poor seem to be false. Last month's Steam survey had the 1070s and 1080s rising up the ranks real quick while the RX 480 was barely represented. The rumor now is that Polaris is actually the GPU that's suffering poor yields. Haven't the 10-series GPUs been out a month longer than the RX 480? Makes sense that things would still be ramping up.
|
# ? Aug 24, 2016 04:22 |
|
In stock at the moment (all MSI on Newegg):
RX 470 4GB: $200
RX 470 8GB: $240
RX 480 4GB: $250
Oh god, which do I buy? None, because I'm about to stop gaming for a month next week.
|
# ? Aug 24, 2016 06:31 |
|
Craptacular! posted:In stock at the moment (all MSI on Newegg) >:[
|
# ? Aug 24, 2016 06:41 |
|
For a real answer, I'd probably go with the RX 470 8GB. The 470 isn't so far behind the RX 480 that it's worth paying 10 bucks more for half the VRAM. 250 bucks is way too much for a 4GB RX 480; it should be 200, which is the current price of the 4GB RX 470s.
|
# ? Aug 24, 2016 06:47 |
|
I have a tiny case and wanna get an RX 480 when things settle down. If I want to get one with a blower cooler, are the reference cards fine or do OEMs usually come out with a better design?
|
# ? Aug 24, 2016 06:52 |
|
beepsandboops posted:I have a tiny case and wanna get an RX 480 when things settle down. If I want to get one with a blower cooler, are the reference cards fine or do OEMs usually come out with a better design? Well, it can't be worse than the reference, since the reference is about as cheap a blower cooler as I've seen in recent history.
|
# ? Aug 24, 2016 07:00 |
|
Craptacular! posted:In stock at the moment (all MSI on Newegg) Get an aftermarket 1060 for $250. It's 10-15% faster than an RX 480, and has 6GB at that price.
|
# ? Aug 24, 2016 07:18 |
|
Rastor posted:Yes, this kind of thing is called a mezzanine connector and nVidia is already pursuing it. That's cool, although it might be trouble for AIB partners that haven't diversified enough.
|
# ? Aug 24, 2016 07:41 |
|
BurritoJustice posted:Get an aftermarket 1060 for $250. It's 10-15% faster than an RX 480, and has 6GB at that price. Over here in Singapore the cheapest RX 470 8GB costs more than a Palit 1060 6GB. So after the hype dust has settled, AMD still gets the shaft.
|
# ? Aug 24, 2016 08:31 |
|
I LIKE TO SMOKE WEE posted:Is http://semiengineering.com/ a reliable/useful/relatively unbiased source for this sort of industry news? It's a suspiciously nice website, and the lack of ads is surprising. Semi Engineering does the best reporting in the business right now. They mostly cover high-level stuff, the state of the industry rather than the latest chips, but anyone curious about the challenges of actually making transistors should pop in and read their articles every couple of months. To pick up the discussion from before: the main challenge for newer and smaller process nodes (beyond 7nm, especially) is mass production. The cost to design SoCs is skyrocketing: 14nm is triple the cost of 28nm, and 7nm is expected to triple design costs again. Each wafer takes longer to process due to increasingly complicated fabrication, not to mention you need to invest in $10 billion fabs to make them. That's already affecting the industry: GloFo is skipping 10nm because they expect lower demand for it, since a lot of firms are deciding they can't afford the design costs of staying on the bleeding edge.
|
# ? Aug 24, 2016 11:49 |
|
Icept posted:That's cool, although it might be trouble for AIB partners that haven't diversified enough. For gamers, and the AIB partners that cater to them, nothing is going to change for a very long time; the PCI SIG is already working on a PCIe 4.0 standard to carry the PCIe connector years down the road. Here's the Anandtech writeup on the nVidia server with the mezzanine connector: http://www.anandtech.com/show/10229/nvidia-announces-dgx1-server The key phrase is that the GPUs connect back to their x86 host CPU over standard PCI Express. Until nVidia convinces Intel to implement their NVLink system in their processors/chipsets, the communication method between CPU and GPU will be PCI Express, and that means PCIe connectors will suffice for people with only one or two GPUs. It's not impossible that Intel might implement NVLink, but it is surely very low on their priorities.
|
# ? Aug 24, 2016 12:08 |
|
With that mezzanine stuff, though, if it ever gets to consumer-grade motherboards, I bet you anything we'll be looking at Nvidia motherboards and ATI motherboards with nothing interchangeable. Want to change brands? That's a new GPU and motherboard combo, and the Nvidia one will be 50% more expensive.
|
# ? Aug 24, 2016 12:43 |
|
beepsandboops posted:I have a tiny case and wanna get an RX 480 when things settle down. If I want to get one with a blower cooler, are the reference cards fine or do OEMs usually come out with a better design? AMD reference blowers are a huge pile of garbage, so I would stay away from that.
|
# ? Aug 24, 2016 15:33 |
|
You could undervolt and underclock the crap out of it to make it livable, and limit the fan speed to something like 40%. I don't know of any manufacturer that makes upmarket blowers except Nvidia's vapor-chamber coolers for the x80/Ti/Titans. Maybe an AIO watercooled one?
|
# ? Aug 24, 2016 16:23 |
|
Gwaihir posted:AMD reference blowers are a huge pile of garbage, so I would stay away from that. quote:The Radeon RX 480 does really well under load. During our gaming loop, it’s no louder than Nvidia's reference GeForce GTX 1070. This is in spite of its higher power consumption, simpler cooling solution and more mainstream construction. http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-10.html Do you also think Nvidia's FE coolers are crap? Both throttle their cards without a fan-curve adjustment, and both produce around the same dB(A).
|
# ? Aug 24, 2016 20:04 |
Tanreall posted:http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-10.html https://www.techpowerup.com/reviews/AMD/RX_480/23.html And here we see a 5dB difference, which is fairly large as far as perceived noise goes.
|
|
# ? Aug 24, 2016 21:48 |
|
Gwaihir posted:AMD reference blowers are a huge pile of garbage, so I would stay away from that. I have the RX 480 and I love it, but yes, the reference cooler has a very annoying timbre. I spent $20 on an Arctic Accelero S1 passive cooler and zip-tied a single 140mm fan to it, and now I can run the fan at an inaudibly low RPM while it stays below 70C at full overclocked load. Looks goofy, but it's cheap and completely silent. Posting again for posterity, since this will get incredible results on pretty much anything with a 200W TDP or less:
|
# ? Aug 24, 2016 21:54 |
|
Mea culpa.
|
# ? Aug 24, 2016 22:50 |