Llamadeus
Dec 20, 2005
2K and 4K refer to horizontal resolutions (2048 and 4096) because they originate in film scanning: it makes more sense to standardize scanning resolutions by the width of the film.

Llamadeus
Dec 20, 2005

jonathan posted:

Why are the centrifugal fan/blower cards considered worse? They don't seem to recirculate any of the warm air back into the case. Is it just a matter of them being noisy?
The heatsinks on the blower designs are a lot smaller physically as well, so that single fan has to do more work than all 2-3 fans of an open card. The benefit of not recirculating air generally doesn't offset this in cases with even a little bit of airflow or ventilation.

Llamadeus
Dec 20, 2005
My AMD card self-reports at ~35W idle (at 300 MHz too, so it's probably mostly VRAM?). I've read that having two monitors of differing resolution increases idle power significantly, though I have no idea how true that is.

Llamadeus
Dec 20, 2005

Zedsdeadbaby posted:

I thought ray tracing was supposed to be virtually impossible to do in realtime. Mind you, I read that around 10-15 years ago. Is it just the actual existence of raw computing power that's making it possible now?
A path-traced Quake 2 (without any denoising): https://www.youtube.com/watch?v=x19sIltR0qU (YouTube's compression doesn't do it much good, but there's a link to the source video)

Dadbod Apocalypse posted:

For the olds with ancient aging bodies among us, can someone explain what ray-tracing brings to the gaming table that's different from what's going on now?

Please explain as if you were talking to a three year old.
Modern real-time rendering uses a large number of hacks and approximations to get something vaguely approaching realism. Path tracing is a ground-up approach to rendering a scene, essentially building it up photon by photon, so it gets a lot of realistic lighting effects right without any extra effort.



The softening of the shadow here as it extends further out would be expensive to implement in current engines, for instance. In practice I think the most immediate benefit of path tracing is more accurate reflections, since current games use either a cubemap or some screen space effect, both of which can look pretty janky.
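
Just to make the "no extra effort" part concrete, here's a toy Monte Carlo soft-shadow estimator. Everything in it (the scene layout, the light size, the sample count) is made up for the example, so treat it as a sketch of the idea rather than how any engine actually does it: you just sample the light, and the penumbra falls out on its own.

code:

import math
import random

LIGHT_CENTER = (0.0, 5.0, 0.0)    # square area light hanging above the ground
LIGHT_HALF = 0.5                  # half-size of the light in x and z
BLOCKER_CENTER = (0.0, 1.0, 0.0)  # sphere floating between the light and the ground
BLOCKER_RADIUS = 0.5

def segment_hits_sphere(origin, target, center, radius):
    # Standard ray-sphere test, restricted to the segment from origin to target
    d = tuple(target[i] - origin[i] for i in range(3))
    o = tuple(origin[i] - center[i] for i in range(3))
    a = sum(di * di for di in d)
    b = 2.0 * sum(oi * di for oi, di in zip(o, d))
    c = sum(oi * oi for oi in o) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return 0.0 < t < 1.0  # blocked somewhere between the point and the light sample

def light_visibility(point, samples=256):
    # Fraction of the area light visible from a point on the ground plane.
    # Sampling many points on the light is all it takes to get a soft penumbra.
    visible = 0
    for _ in range(samples):
        lx = LIGHT_CENTER[0] + random.uniform(-LIGHT_HALF, LIGHT_HALF)
        lz = LIGHT_CENTER[2] + random.uniform(-LIGHT_HALF, LIGHT_HALF)
        if not segment_hits_sphere(point, (lx, LIGHT_CENTER[1], lz),
                                   BLOCKER_CENTER, BLOCKER_RADIUS):
            visible += 1
    return visible / samples

# Walking outwards along the ground, the shadow edge softens with distance,
# with no special-case penumbra code anywhere.
for x in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"x = {x:.1f}  light visibility ~ {light_visibility((x, 0.0, 0.0)):.2f}")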

Llamadeus
Dec 20, 2005
You can do both at once: https://en.wikipedia.org/wiki/Path_tracing#Bidirectional_path_tracing

Llamadeus
Dec 20, 2005

Alpha Mayo posted:

All you lose with AMD right now is single-threaded performance, which is important for gaming and almost nothing else.
I don't think this is necessarily true; there's lots of software with single-thread bottlenecks out there. Also, IIRC Intel's architecture currently holds a significant advantage in AVX workloads, so the gap in many multi-threaded workloads is larger than what Cinebench, for example, would suggest.

https://www.anandtech.com/show/11859/the-anandtech-coffee-lake-review-8700k-and-8400-initial-numbers/10
https://uk.hardware.info/reviews/76...4-x265-and-flac

Using video encoding as an example: in the x264 tests Intel 6c/6t is about equal to Ryzen 6c/12t, while in x265 (which I think leans even more on AVX) the 6c/6t i5s can even match the 8c/16t Ryzens.

Llamadeus
Dec 20, 2005

orcane posted:

The human eye is not a camera though, so DoF effects and motion blur are 100% about games trying to be movies. So yeah, art.
The DoF effect games emulate is specifically a camera aperture, but the human eye definitely is a camera in this sense as well: it has a lens and an aperture.

There are good reasons to have motion blur in video that are unrelated to movies: it's arguably less correct to treat each pixel as a point sample in the temporal dimension, and the conventional 180-degree shutter angle is a good compromise between sharpness and coverage. Movies have the option to reduce motion blur too, but they don't because less blur looks unnaturally jerky.
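
For anyone who wants the actual numbers behind the 180-degree rule: the per-frame exposure (blur) time is just (shutter angle / 360) × frame time. The values below are the usual film/game examples rather than anything from a specific engine.

code:

def exposure_ms(fps, shutter_angle_deg=180.0):
    # Fraction of the frame interval the "shutter" is open, in milliseconds
    return (shutter_angle_deg / 360.0) * (1000.0 / fps)

print(exposure_ms(24))         # film: 24 fps at 180 degrees -> ~20.8 ms of blur per frame
print(exposure_ms(60))         # a 60 fps game with the same angle -> ~8.3 ms
print(exposure_ms(24, 360.0))  # full 360-degree shutter -> ~41.7 ms, maximum smear
print(exposure_ms(24, 0.0))    # 0 degrees -> pure point sampling in time, the "jerky" look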

Llamadeus
Dec 20, 2005

DrDork posted:

More like because everyone is conditioned to how movies at 24fps look, so anything higher "looks wrong" despite the higher rates actually being closer to how the eye would normally perceive and register moving scenes.
Increasing frame rate does result in more realistic video, but the thing I'm describing is shutter angle at a given frame rate.

For a visual example: https://www.youtube.com/watch?v=xn1mpszEjZM

Llamadeus
Dec 20, 2005

1gnoirents posted:

It's probably out there, but this is a pretty stark change from seriously 1 year ago, where you'd almost always see Sandy Bridge thrown into benchmarks when reviewing the newest CPU on at least some major sites. Now it's hard to even find Haswell.
Here's one with Sandy Bridge and Haswell: https://www.techspot.com/review/1546-intel-2nd-gen-core-i7-vs-8th-gen/

Llamadeus
Dec 20, 2005
That depends on what's meant by foreseeable, I think? The next gen of consoles is expected to release in 2020, and I can imagine games needing more than a 1060 at 1080p60 after that (if you want all the fancy new effects and stuff).

Llamadeus
Dec 20, 2005

buglord posted:

Is there a ray tracing / no ray tracing comparison video? Because all my smooth brain sees in these demos are just better shadows or reflections or something
Nvidia's demo from a few months back: https://www.youtube.com/watch?v=tjf-1BxpR9c

Better reflections and shadows (and occlusion) are basically the whole point of it~

Llamadeus
Dec 20, 2005

Partial Octopus posted:

So it seems like most 3-fan cards are in the 300mm range. My case can fit a max of around a 286mm card. Will I lose much performance if I go with a shorter card with fewer fans? Or is it just going to be a bit louder? Is it worth changing my case to just accommodate larger cards?
I'd expect most two-fan coolers to be at least adequate, especially from companies like EVGA that generally don't skimp on cooling.

There are some outliers, like MSI's Armor cooler where they tried to save too much money on the heatsink, or small form factor cards that intentionally give up performance to fit smaller builds.

Llamadeus
Dec 20, 2005
The FE designs do look sorta cool

Llamadeus
Dec 20, 2005
It's kinda telling how many of the games demoed so far do only one effect at a time: shadows in Tomb Raider, reflections in Battlefield, diffuse GI for Metro.

Control looks like it uses all of the above at once, though. I'm guessing it's even less performant in real time.

Llamadeus
Dec 20, 2005

BIG HEADLINE posted:

It's examples like this that prove something definitive - if you're playing a competitive shooter, you're going to play with RTX off simply because of better visibility. Who cares if it means washed out, unnatural-looking scenes if it's the difference between getting a kill and getting killed.
That's obviously a demo; you'd likely adjust the ambient lighting without RTX to match it in overall brightness. Or even put it at a slight penalty if you wanted to encourage people to use it for some reason.

Llamadeus
Dec 20, 2005
Quake 3 must be the earliest example of this: it had a console command to turn texture resolution down to basically nothing... which became standard in competitive play.

Llamadeus
Dec 20, 2005
Also standard in competitive play: forcing other players to render with the biggest, most visible model. Coloured bright green:

Llamadeus
Dec 20, 2005
Channel Well Technology does the RMx line.

Llamadeus
Dec 20, 2005

Zero VGS posted:

My general impression was that "per transistor cost may increase" means in practice that a 2080 chip costs Nvidia $32 instead of $30 to manufacture, or something to that effect. Would the chip wholesale price difference of a couple bucks really scale exponentially to the MSRP? I would think MSRP is entirely based on what the market will bear and how much the competition is undercutting you.
I think he's mostly referring to 7 nm and beyond here in terms of actual die shrinks.

https://semiengineering.com/big-trouble-at-3nm/ - this article suggests that design costs alone are expected to run into the hundreds of millions of dollars per chip:

quote:

Design costs are also a problem. Generally, IC design costs have jumped from $51.3 million for a 28nm planar device to $297.8 million for a 7nm chip and $542.2 million for 5nm, according to IBS. But at 3nm, IC design costs range from a staggering $500 million to $1.5 billion, according to IBS. The $1.5 billion figure involves a complex GPU at Nvidia.
And the actual costs of developing and fabbing 7nm (let alone 5nm, etc.) are high enough that GlobalFoundries just decided to cut its losses, leaving TSMC with something close to a monopoly. So in the absence of real competition it makes a lot of sense to offer the bare minimum in performance/price increases to get the most out of expensive, slowing node improvements.

Llamadeus fucked around with this message at 03:07 on Aug 31, 2018

Llamadeus
Dec 20, 2005
And the TU104 cards (2070 & 2080) are going to be 8 GB too; you'll have to go up to a 1080 Ti or 2080 Ti to get 11 GB.

Llamadeus
Dec 20, 2005

AVeryLargeRadish posted:

What you outlined is reasonable, but it's also not what I was complaining about. I'm seeing people say that Nvidia should not release cards with new tech unless there is software on the market that can use that new tech, to me at least that is ridiculous, someone has to take the first step and hardware makers are the ones that make the most sense for that.
Expecting some software that makes use of your new hardware to be ready on launch day is perfectly reasonable.

Like, nobody releases game consoles with zero launch games for the same reason.

Llamadeus
Dec 20, 2005

AVeryLargeRadish posted:

Yes, it is an unreasonable expectation on the PC. Consoles are completely different, you can't play a console game at all without the console, you're basically saying that developers should put resources into features that might be used at some point in the future and no dev in their right mind will do that when there are other things they could put their resources towards that see immediate benefit now.
The immediate benefit for the developer is that Nvidia pays them money to implement RTX features, e.g. Shadow of the Tomb Raider and Metro Exodus.

Nvidia just decided to launch the cards before a single implementation was ready.

Llamadeus
Dec 20, 2005

Zero VGS posted:

It definitely smears stuff in some of those shots. In motion it does seem hard to notice. However there's sustained shots in that video where it's the same or even a frame or two slower than TAA, so while much faster on average, it's not exactly consistent. I would like to see a comparison video like that, but showing DLSS, TAA, and no aliasing at all, so I can see how accurate they're both getting to the reference material.
I think the fair way to compare DLSS to TAA would be to take the speed difference into account and compare DLSS with TAA at whatever resolution gives similar framerates, preferably with something like UE4's temporal upscaling.

Llamadeus
Dec 20, 2005

K8.0 posted:

Real-time TAA still always looks like dogshit, which is specifically why Nvidia chooses it as a comparison point. When we can compare DLSS to non-TAA images, then we'll see how it actually looks, and whether it makes 4k monitors a worthwhile purchase when worthwhile ones come out, or whether it's better to stick with 1440.
Good temporal AA looks vastly better than any spatial method at comparable speeds, and that's the reason every other method has been phased out in console games.

Llamadeus
Dec 20, 2005
It's worth pointing out that PSNR is not necessarily an effective way to judge encoding quality across different implementations. x264 contains a few optimizations that worsen PSNR and SSIM but help a lot in improving perceptual quality.
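
For reference, PSNR is just a log-scaled mean squared error against the source, which is exactly why an encoder can trade it away for perceptual quality. A toy 8-bit example (random frames, not tied to any particular encoder's output):

code:

import numpy as np

def psnr(reference, encoded, max_value=255.0):
    # Peak signal-to-noise ratio: purely per-pixel error, no perceptual model at all
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
# Add a little uniform noise to stand in for "encoded" output
noisy = np.clip(frame.astype(np.int16) + rng.integers(-3, 4, frame.shape), 0, 255)
print(f"{psnr(frame, noisy):.1f} dB")  # small uniform error -> high PSNR, regardless of how it looks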

Llamadeus
Dec 20, 2005
Noctua do have black versions of their fans, as well as fully black versions of the D15 etc. coming in the next year.

Llamadeus
Dec 20, 2005
I run the latest version for a few hours (sometimes it crashes/errors when other stuff doesn't), which I can't imagine being that much more damaging than running "normal" 100% loads with the same overclock for hours at a time over years.

Llamadeus
Dec 20, 2005

Paul MaudDib posted:

Asus's Haswell-E Overclocking guide:


24 hours of 400W load can easily be worse than years of 200W load. Electromigration is an exponential effect, not a linear one. The more power you pull and the hotter you run, the worse it's gonna be.
This is surely an exaggeration; have you honestly ever seen any numbers that suggest Prime95 is literally double the power draw of a normal load?

In any case, I'd expect anyone pushing the limits of >300W CPUs to just use some actual common sense here.

Llamadeus
Dec 20, 2005

Scionix posted:

Isn't an overclocked 2700x about 8700k performance?
Only in very parallel workloads like Cinebench or x264. In single-thread-dependent applications and games there's still quite a large gulf.

The 9900K will be ahead in both, so they'll be able to charge a large premium for it, maybe even after the next generation of Ryzens launches.

Llamadeus fucked around with this message at 21:23 on Oct 12, 2018

Llamadeus
Dec 20, 2005

Truga posted:

40% upscaling means it's rendering 1.4 times the pixels your monitor has.
Usually the value controls the amount per axis, so it's 1.96 times as many pixels~

Though a few do it by total pixel count instead.
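
The per-axis vs total-pixel difference as plain arithmetic (using 1440p as a hypothetical native resolution, with the 1.4 factor from the quoted 40% example):

code:

native_w, native_h = 2560, 1440  # example native resolution
scale = 1.4                      # "40% upscaling"

per_axis_pixels = (native_w * scale) * (native_h * scale)  # scale applied to each axis
per_total_pixels = (native_w * native_h) * scale           # scale applied to the pixel count

print(per_axis_pixels / (native_w * native_h))   # 1.96x as many pixels
print(per_total_pixels / (native_w * native_h))  # 1.4x as many pixels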

Llamadeus fucked around with this message at 00:19 on Oct 21, 2018

Llamadeus
Dec 20, 2005
That single fan doesn't look any bigger than the dual fans though; it's just a smaller heatsink to save costs and fit small form factor builds. And heatsink surface area is the more important factor.

Llamadeus
Dec 20, 2005
There's one unused fan slot at the bottom :v:

Llamadeus
Dec 20, 2005
Digital Foundry also had a weird outlier where the 5700 and 5700 XT really underperformed at 1080p and 4K in Metro Exodus (but not 1440p): https://www.eurogamer.net/articles/digitalfoundry-2019-amd-radeon-rx-5700-rx-5700-xt-review-head-to-head-with-nvidia-super?page=5

I'd guess this is something fixable in a driver update.

Llamadeus
Dec 20, 2005
The other disadvantage of the probe approach is that the resolution of the GI is limited by the density of the probes. The per-pixel approach has more refined high-frequency detail, so the probes are more reliant on screen-space AO to make up for it.

Still a pretty good trade-off if your main goal is to capture the general feel of the lighting in a space (and no worse than current baked probes).

Llamadeus
Dec 20, 2005

Zotix posted:

How much better at 1440p is a 2070 Super than a 1070GTX? I know we are basically talking 2+ generations of video cards, but I'm building an entirely new rig with a 3900x in it, and I was going to carry over my 1070 for a year until the next gen is fully released. However, I'm not sure of the performance I'll see on my new rig with the 1070. I'm upgrading from a 3570k to a 3900x, so I'm assuming I'll have some performance increase.
They're kinda more like one generation apart, but it's a substantial jump in the worst cases: https://www.techspot.com/review/1865-geforce-rtx-super/

Llamadeus
Dec 20, 2005

ZobarStyl posted:

Having said that, I couldn't find anything on how the 3400G is laid out internally - does it use a distinct GPU chiplet or is it a single die with a CCX and GPU cores?
The latter: https://en.wikichip.org/wiki/amd/microarchitectures/zen#APU

Llamadeus
Dec 20, 2005
Well, also keep in mind that that article is written more in the context of buying TVs for watching video, which generally has closer-to-ideal levels of aliasing.

Llamadeus
Dec 20, 2005

Enos Cabell posted:

Weird, I have a 9700k but haven't noticed any stuttering at all. 1080ti @1440p mostly high settings, averaging around 65fps.
Not that weird; a bit later in the video they show that the stuttering only occurs if the framerate gets too high, and it goes away at 1440p medium with a 2080 Ti.

Llamadeus
Dec 20, 2005

Taima posted:

Yeah definitely. What’s the deal with that? It was also sometimes like an overexposed photo.
I think this has to do with the way the HDR colour space and transfer curve are defined (in BT.2020/2100 and ST 2084 and other standards on top like Dolby Vision).

The formats are futureproofed to the point that they specify much wider gamuts and higher peak brightnesses than any consumer display is capable of, so the display itself has to do an additional tone mapping operation to make it look acceptable.

And I assume HDR monitors are not as good at this as TVs are, if they're doing anything at all.
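
Rough sketch of the display-side problem: the PQ curve (SMPTE ST 2084) encodes absolute luminance up to 10,000 nits, so a display that only reaches a few hundred nits has to squash everything above that somehow. The PQ constants below are from the standard; the roll-off is just an illustrative curve, not what any specific TV or monitor actually does.

code:

# SMPTE ST 2084 (PQ) constants
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal):
    # ST 2084 EOTF: PQ code value in [0, 1] -> absolute luminance in nits
    p = signal ** (1.0 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1.0 / m1)

def tone_map(nits, display_peak=600.0, knee=0.75):
    # Pass through up to a knee, then roll off asymptotically toward the display's peak
    start = knee * display_peak
    if nits <= start:
        return nits
    excess = nits - start
    room = display_peak - start
    return start + room * (excess / (excess + room))

for code in (0.5, 0.7, 0.9, 1.0):
    nits = pq_to_nits(code)
    print(f"PQ {code:.1f} -> {nits:7.1f} nits in the signal, "
          f"~{tone_map(nits):6.1f} nits on a 600-nit display")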

Llamadeus fucked around with this message at 17:28 on Dec 10, 2019

Llamadeus
Dec 20, 2005
"PS5 = 5500" is beyond lowballing it though, the current Xbox One X already has 5500 level perfomance and the fanboys are already mildly disappointed by the leaked 36 CU number.
