Tres Burritos
Sep 3, 2009

Suspicious Dish posted:

Vulkan? No, it will be freely available and open upon release. That's why every part of it has to go through arduous legal review processes, because if you accidentally sneak in a texture from a copyrighted game, well, you can't release that publicly.

Drivers, reference documentation, samples, SDKs, tooling, conformance test suites, benchmarks and more will all be available when it is released.

I cannot wait to be confused.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
When you're drawing to a cube map, how do you assign a renderbuffer to the depth attachment? I can't figure out how to make it layered based on a Google search, and I assume I need a 6-layered renderbuffer to do depth testing for a 6-layered render target?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
OpenGL or Direct3D?

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
OpenGL

Joda fucked around with this message at 02:08 on Dec 20, 2015

dlr
Jul 9, 2015

I'm fluid.
Are there any nice, official guides to modern OpenGL that I could read?

Jewel
May 2, 2009

Some good modern OpenGL tutorials/guides I remember offhand are:

http://www.learnopengl.com/ (probably the best one here, though a few later pages are still in development)
http://ogldev.atspace.co.uk/index.html (seems really good too! maybe not as good as the first one though)
http://www.opengl-tutorial.org/ (was the one that I used, but I think the others are better now)
http://open.gl (not super long, but live demos in webgl)
http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html (seems good at going in depth of why/what)

The first one is really really good; full of diagrams, nicely formatted code snippets, screenshots, additional resources, and source code.

This article is a pretty good example: http://www.learnopengl.com/#!Advanced-Lighting/Shadows/Shadow-Mapping

Jewel fucked around with this message at 15:10 on Jan 2, 2016

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Are there any good resources that explain how exactly pipeline implementations cast 11/10/16-bit float texture reads to 32 bits, and the other way around for color attachment writes? I'm trying to figure out how to encode a single bit of information in the LSB of the mantissa of a 16-bit float channel, in a way that will be maintained when it's written to/read from the texture unit. I tried just encoding it into the 14th least significant bit of the 32-bit float in the shader (since there's a 13-bit precision difference between the mantissas of 16-bit and 32-bit floats), but it doesn't seem to work. I assume this is because the cast is more complicated than just removing the 13 least significant bits and adjusting the exponent accordingly. The lack of available information on this subject makes it really hard to work with, so any resources would be appreciated.
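
A hedged sketch of one way to sidestep guessing at the implicit conversion, assuming GLSL 4.2+ or ARB_shading_language_packing is available: do the float-to-half rounding yourself with packHalf2x16, poke the flag into the low mantissa bit of the 16-bit pattern, and convert back, so the value handed to the RG16F attachment is already exactly representable as a half and should not be re-rounded on write ('value' and 'flag' are illustrative names).

code:
// round to half precision explicitly; the low 16 bits hold 'value' as an fp16 bit pattern
uint h = packHalf2x16(vec2(value, 0.0));
// carry the flag in the least significant mantissa bit of that fp16 pattern
h = flag ? (h | 1u) : (h & ~1u);
// convert back to float; the result is exactly representable as fp16,
// so the RG16F attachment write should preserve it bit-for-bit
float encoded = unpackHalf2x16(h).x;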

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Uh, how did you get a 10/11-bit float texture? The only way I know to specify float textures is GL_FLOAT, unless you're using GL_HALF_FLOAT_ARB, in which case you can't cast a half to a float in GLSL, AFAIK.

Sex Bumbo
Aug 14, 2004

Suspicious Dish posted:

Uh, how did you get a 10/11-bit float texture? The only way I know to specify float textures is GL_FLOAT, unless you're using GL_HALF_FLOAT_ARB, in which case you can't cast a half to a float in GLSL, AFAIK.

https://www.opengl.org/wiki/Small_Float_Formats

What are you trying to do Joda?

Sex Bumbo
Aug 14, 2004
Has anyone used Raster Ordered Views / pixel sync and been like whoaaa that's slow? I feel like depth peeling would actually be faster.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

I'm trying to use an RG16F to encode normals for a scene that is centrally projected, since it saves me a texture tap. I did some testing that shows that access to an RGB16F is much slower than RG16F/R11G11B10, probably because it requires an extra texture read. The 11_11_10 texture is too imprecise for normals. The problem is that, since it's projected with central projection, both negative and positive Z coordinates are possible, so I need to buffer the sign of the Z coordinate, and I thought I could do that by sacrificing precision on the Y coordinate.

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Joda posted:

I'm trying to use an RG16F to encode normals for a scene that is centrally projected, since it saves me a texture tap. I did some testing that shows that access to an RGB16F is much slower than RG16F/R11G11B10, probably because it requires an extra texture read. The 11_11_10 texture is too imprecise for normals. The problem is that, since it's projected with central projection, both negative and positive Z coordinates are possible, so I need to buffer the sign of the Z coordinate, and I thought I could do that by sacrificing precision on the Y coordinate.

You could use UNORM surfaces and then use shader language intrinsics to unpack them manually as needed (I believe this is recommended SOP). Some APIs will even let you alias a surface as float or UNORM.
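
A hedged illustration of that manual route (not necessarily exactly what is meant by aliasing, and assuming the GLSL 4.1+/ARB_shading_language_packing intrinsics): render into a plain 32-bit unsigned-integer target and do the float-to-UNORM conversion by hand, which sidesteps the implicit float16 rounding entirely ('normal', 'bits' and 'xy' are illustrative names).

code:
// write side: remap the normal's XY from [-1,1] to [0,1] and quantize to two 16-bit UNORM halves
uint bits = packUnorm2x16(normal.xy * 0.5 + 0.5);
// read side: undo the quantization and the remapping
vec2 xy = unpackUnorm2x16(bits) * 2.0 - 1.0;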

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Hubis posted:

You could use UNORM surfaces and then use shader language intrinsics to unpack them manually as needed (I believe this is recommended SOP). Some APIs will even let you alias a surface as float or UNORM.

Not sure what a UNORM surface is, but your post led me to this, which is perfect for what I want to do. I still need to figure out how to pack the 11_11_10 format into an unsigned int so I can have all my data in a single RGBA32UI texture, but one thing at a time.
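
For reference, a hedged way to sketch that last part (note: this is not the actual R11F_G11F_B10F float encoding, just a fixed-point 11/11/10 split with the same bit budget, assuming the inputs are already in [0,1]; the function names are made up for illustration):

code:
uint pack11_11_10(vec3 v) {
    uint r = uint(clamp(v.x, 0.0, 1.0) * 2047.0 + 0.5); // 11 bits
    uint g = uint(clamp(v.y, 0.0, 1.0) * 2047.0 + 0.5); // 11 bits
    uint b = uint(clamp(v.z, 0.0, 1.0) * 1023.0 + 0.5); // 10 bits
    return r | (g << 11) | (b << 22);
}

vec3 unpack11_11_10(uint p) {
    return vec3(float(p & 2047u) / 2047.0,
                float((p >> 11) & 2047u) / 2047.0,
                float(p >> 22) / 1023.0);
}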

I'm not sure why I'm getting compiler errors:

code:
uint packNormal(vec3 norm){
    uint packed = packSnorm2x16(norm.xy);

    if(norm.z >= 0.0f)
        packed |= 1u;
    else
        packed &= 4294967294u;

    return packed;
}
And I'm getting this error:

code:
A shader ../shaders/gen_gbuffer.frag did not compile
0(31) : error C0000: syntax error, unexpected '=', expecting ';' or '(' at token "="
0(34) : error C0000: syntax error, unexpected "|=", expecting "::" at token "|="
0(36) : error C0000: syntax error, unexpected "&=", expecting "::" at token "&="
0(38) : error C0000: syntax error, unexpected ';', expecting "::" at token ";"
0(30) : error C1110: function "packNormal" has no return statement
E: Apparently that's not the problem. Whenever I try to assign a value to a uint variable in the shader, I get those errors. So I can return packSnorm2x16(norm.xy); without any issues, but when I have a temporary local uint variable (whether I do the bit-wise operators on it or not), it gives me errors. Even something as simple as returning packed right after initialising it fails.
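
One hedged guess: packed is a reserved word in several GLSL versions, and the NVIDIA error pattern above is consistent with the compiler rejecting it as an identifier, so renaming the local may be all that's needed. A sketch:

code:
uint packNormal(vec3 norm) {
    // 'bits' instead of 'packed', which some GLSL compilers treat as reserved
    uint bits = packSnorm2x16(norm.xy);

    if (norm.z >= 0.0)
        bits |= 1u;   // LSB = 1 marks a non-negative Z
    else
        bits &= ~1u;  // LSB = 0 marks a negative Z

    return bits;
}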

E2: It works if I just return it in one line with an ugly in-line if-statement. I'm getting some weird results, though.

code:
uint packNormal(in vec3 norm) {
    return (norm.z >= 0.0f) ? packSnorm2x16(norm.xy) | 1u : packSnorm2x16(norm.xy) & 4294967294u;
}
unpack:
code:
vec3 unpackNormal(uint packedNormal) {
    float Z_sign = sign(float(packedNormal % 2u) - 0.5f);
    vec2 normXY = unpackSnorm2x16(packedNormal);

    return normalize(vec3(normXY.x,normXY.y,Z_sign * sqrt(1 - normXY.x*normXY.x + normXY.y*normXY.y)));
}
It's probably a problem with my sign packing. I'm trying to write 0 to the LSB if Z < 0 and 1 if Z > 0, and extract it by just taking modulo 2 on the packed uint (which should return 1 or 0 accordingly).

Result:


The table and the ceiling should both be completely illuminated by the light source, which is placed right around where you can see a bright spot on the ceiling. These black patches are only occurring along horizontal (in relation to the camera) surfaces. Vertical surfaces like the walls seem to work fine.
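
A hedged observation on the unpack above: for a unit normal, z*z = 1 - x*x - y*y, so the y term should be subtracted rather than added; that alone could skew the reconstructed Z on surfaces whose normals have a large Y component. A corrected sketch, with a clamp against small negative values from the 16-bit quantization:

code:
vec3 unpackNormal(uint packedNormal) {
    float zSign = ((packedNormal & 1u) == 1u) ? 1.0 : -1.0;
    vec2 nXY = unpackSnorm2x16(packedNormal);
    // unit length: z*z = 1 - x*x - y*y; max() guards against quantization error
    float z = zSign * sqrt(max(1.0 - nXY.x * nXY.x - nXY.y * nXY.y, 0.0));
    return normalize(vec3(nXY, z));
}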

Joda fucked around with this message at 20:17 on Jan 6, 2016

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...
Yup! That extension is exactly what I meant. Not sure what's up with the shader error, though...

Nahrix
Mar 17, 2004

Can't afford to eat out

Xerophyte posted:

This is 2D graphics but I haven't seen anyone link to Benedikt Bitterli's Tantalum here, which is a shame since it's cool as hell. He also includes a full derivation of the 2D light transport formulas. :swoon:

E: It's also really pretty, ok?

E2: So pretty...

Nifty!

lord funk
Feb 16, 2004

So when rendering a face, the normal points in one direction, so the 'front' of the face will reflect light, but the back does not. Is there a way to get the back to act the same way as the front? I.e., is there a way to detect that the face is being drawn the wrong way round so I can invert my normal?

edit: just realized, can I just check the clockwise-ness of the face when I update the normal? going to try that out...

lord funk fucked around with this message at 16:51 on Feb 16, 2016

haveblue
Aug 15, 2005



Toilet Rascal
It's probably easier to add reversed polygons to the underlying model; doing a normal-flipping pass each frame doesn't sound like the right approach.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

lord funk posted:

So when rendering a face, the normal points in one direction, so the 'front' of the face will reflect light, but the back does not. Is there a way to get the back to act the same way as the front? I.e., is there a way to detect that the face is being drawn the wrong way round so I can invert my normal?

edit: just realized, can I just check the clockwise-ness of the face when I update the normal? going to try that out...

For a simple Lambertian shader you can take the abs() of dot(lvector, normal), since the absolute cosine is mirrored around the plane of the normal.

E: And I just realised that will mirror the light as well, so never mind.

Doc Block
Apr 15, 2003
Fun Shoe
Turn off back face culling. The shading language should have a way to determine if the current fragment is on the front or back face (Metal shading language does, GLSL probably does too).

edit: never mind, that doesn't help determine if the winding is wrong.

Tres Burritos
Sep 3, 2009

Aw yeah vulkan drivers and apis dropped today.

Xerophyte
Mar 17, 2008

This space intentionally left blank
It'd help to know what framework you're in, but in general you can fix your normals when shading if you want two-sided rendering. Something like...

code:
// The interpolated at-vertex normals, with bump/normal mapping and all. What you actually shade with.
float3 shading_normal = Bump(u_normal);

// The triangle geometry normal, constant over the triangle. I.e. normalize(cross(v0 - v1, v0 - v2))
float3 geometry_normal = u_geometry_normal;

// If the direction to the camera is not in the same hemisphere as the geometry normal, invert the normals.
float3 here_to_camera = u_camera_position - world_position;
if (dot(here_to_camera, geometry_normal) < 0.0f) {
  shading_normal  = -shading_normal;
  geometry_normal = -geometry_normal;
}
Bear in mind that the interpolated and bumped normals can be backfacing for entirely other reasons, which you generally also want to fix somehow.

E: A co-worker pointed out that the geometry normal isn't typically available in GPU-land, but gl_FrontFacing is and serves the same function here.
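
A hedged GLSL version of the same check using that built-in (v_normal is an illustrative varying name, not from the snippet above):

code:
vec3 n = normalize(v_normal);
if (!gl_FrontFacing) {
    // back-facing fragment: flip so both sides of the face shade the same way
    n = -n;
}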

Xerophyte fucked around with this message at 17:50 on Feb 16, 2016

Sex Bumbo
Aug 14, 2004

Tres Burritos posted:

Aw yeah vulkan drivers and apis dropped today.

:getin:

Sex Bumbo
Aug 14, 2004
I'm honestly impressed with how many hoops every loving vulkan thing wants you to jump through. Guess that's what you miss out on when the api isn't made by msft. I think this is the first time I had to upgrade my python version to install an sdk for a low level graphics api. Good job guys.

Minsky
May 23, 2001

You shouldn't need Python to install the SDK.

Unless you are compiling the loader from source or something.

The only Python scripts that ship with the SDK are the magic-number definitions for people who want to write a Python-based SPIRV compiler.

edit: sorry, I'm guessing you're probably on Linux and its installer depends on Python for whatever reason.

Minsky fucked around with this message at 02:15 on Feb 17, 2016

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Sex Bumbo posted:

I'm honestly impressed with how many hoops every loving vulkan thing wants you to jump through. Guess that's what you miss out on when the api isn't made by msft. I think this is the first time I had to upgrade my python version to install an sdk for a low level graphics api. Good job guys.

It's more "This is what happens when you want the vendor driver stack to get out of the way".

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Congrats, you get to write the vendor driver stack!

Minsky
May 23, 2001

Well, the "you" in this case being someone like Carmack, or the people at Epic. The new APIs are targeted for people who want that kind of control, like triple-A engine developers.

OpenGL and D3D11 are still around if you just want to draw triangles, and there are plenty of ways to push much of the crap on top out of the way (AZDO). The more you do that, though, the closer you get to Vulkan development anyway.

In any case I've basically been breathing Vulkan for about six months now, so if there's technical questions about the API I can try to answer them.

Sex Bumbo
Aug 14, 2004
I was referring to http://lunarg.com/vulkan-sdk/, the first result for vulkan sdk. Obviously the actual API, or writing directly against it, doesn't require Python; I just wanted to fire up some demos and look at the related source without needing a bunch of dependencies.

Minsky posted:

In any case I've basically been breathing Vulkan for about six months now, so if there's technical questions about the API I can try to answer them.

So far (~a day's worth of browsing) I haven't seen any meaningful differences between it and DX12; can you comment on the comparison?


Subjunctive posted:

Congrats, you get to write the vendor driver stack!

This sounds not very fun, but it's also not what Vulkan/DX12 is. They're both about as vendor-agnostic as they can be without screwing anyone over, as far as I can tell. I don't think DX12 does much in the way of tile-based rendering (not to be confused with tiled resources), but it's not like msft has a mobile department they need to care about.

Sex Bumbo
Aug 14, 2004

Minsky posted:

OpenGL and D3D11 are still around

Also I disagree with this: OpenGL should be forgotten forever. It sucks enough that I'd rather use, I don't know, literally anything else if I need some trivial graphics task done.

I don't really get the push to keep people futzing around on DX11. I know it's still going to be supported, but even if I'm just dicking around at home, DX11 is so unsatisfying now. People aren't idiots; they can figure out how to initialize swap chains and descriptor tables and learn about the hardware at the same time. It's fun.

Sex Bumbo fucked around with this message at 03:27 on Feb 17, 2016

Tres Burritos
Sep 3, 2009

Sex Bumbo posted:

I just wanted to fire up some demos and look at the related source without needing a bunch of dependencies.

This works for me on windows

https://github.com/SaschaWillems/Vulkan

Sex Bumbo
Aug 14, 2004
Yeah that worked for me.

Minsky
May 23, 2001

Sex Bumbo posted:

So far (~a day's worth of browsing) I haven't seen any meaningful differences between it and DX12; can you comment on the comparison?

I can't really answer that question with confidence because I haven't looked at DX12 in detail. I only have a sort of "under the hood" understanding of how a lot of DX12 maps to the HW, but that's not really enough to compare the two at an API level. In terms of API knowledge I'm mostly familiar with Vulkan, Mantle and OGL. Overall, my bet is that there is probably not much difference between DX12 and Vulkan or Mantle. If you're familiar with programming one of those APIs, the other two will probably come at little extra cost.

I think it's safe to say that one of the ways Vulkan fundamentally evolved from Mantle (and likely DX12) is that they added stuff to the API to make it friendlier to program GPUs that have tiled rendering architectures, namely most mobile GPUs. It's probably the first API that treats tiled rendering as a first-class construct (Metal probably does too; I'm just not familiar with it).

That's why in Vulkan you don't "bind a surface as a color/depth target". You instead begin a render pass, which starts by "loading" (and optionally clearing) memory from one of those surfaces into renderable memory, and once the render pass ends you can "store" that memory back to the surface, or discard it if you don't need it anymore. You can probably fill in the blanks about how that maps to tiled GPUs and more "traditional" GPUs.

It's a little bit annoying because you have to make another object just to start rendering, but I think it goes a long way toward being able to write portable renderer code between tiled and non-tiled GPUs.

Minsky fucked around with this message at 03:55 on Feb 17, 2016

Sex Bumbo
Aug 14, 2004
Rip windows phone :(

Actually the Xbox 360 used to have a tiled renderer; I guess the xbone doesn't.

Sex Bumbo fucked around with this message at 04:03 on Feb 17, 2016

Minsky
May 23, 2001

Sex Bumbo posted:

Also I disagree with this: OpenGL should be forgotten forever. It sucks enough that I'd rather use, I don't know, literally anything else if I need some trivial graphics task done.

I don't really get the push to keep people futzing around on DX11. I know it's still going to be supported, but even if I'm just dicking around at home, DX11 is so unsatisfying now. People aren't idiots; they can figure out how to initialize swap chains and descriptor tables and learn about the hardware at the same time. It's fun.

Swap chains and descriptors are all fine and nice and easy to understand.

The thing that scares me most about looking at Vulkan code, as someone who writes drivers, is the responsibility for the application to handle resource barriers. Meaning, if you render to a texture in one draw and then want to read from it in another draw, you the app developer have to manually put a barrier command in between to make sure the first draw finishes and the relevant caches are synchronized. In OGL/DX, the driver would detect all of this for you.

This puts more control in your hands, but it also can introduce a lot more hardware-specific errors that you may not be aware of if you choose to primarily develop on a particular hardware vendor's GPU that happens to have coherent caches between those two kinds of operations. Vulkan ships with a debug runtime to catch these kinds of mistakes, but it is probably not very mature just yet.

Edit:

Sex Bumbo posted:

I was referring to http://lunarg.com/vulkan-sdk/, the first result for vulkan sdk. Obviously the actual API, or writing directly against it, doesn't require Python; I just wanted to fire up some demos and look at the related source without needing a bunch of dependencies.

I don't know what that link is (it's also dead now). I think the official place for the SDK is at http://vulkan.lunarg.com (don't bother trying to sign in, just scroll to the bottom and click one of the platform links).

Minsky fucked around with this message at 04:15 on Feb 17, 2016

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Minsky posted:

This puts more control in your hands, but it also can introduce a lot more hardware-specific errors that you may not be aware of if you choose to primarily develop on a particular hardware vendor's GPU that happens to have coherent caches between those two kinds of operations. Vulkan ships with a debug runtime to catch these kinds of mistakes, but it is probably not very mature just yet.
But wasn't that the case already with things like OpenGL shader syntax being interpreted differently by different vendors? I agree having a DX-style debug runtime would be good, though!

The_Franz
Aug 8, 2003

Sagacity posted:

But wasn't that the case already with things like OpenGL shader syntax being interpreted differently by different vendors? I agree having a DX-style debug runtime would be good, though!

No, that was a case of every vendor doing their own thing when it came to the implementation, coupled with the lack of a real reference compiler for a long time. This is a case of different hardware being different. You wanted explicit APIs that let you micro-manage everything? Well, you've got them, along with all of the associated headaches.

Minsky
May 23, 2001

My neurosis just stems from experience of how easy it is to miss a certain bit in one of these barriers, coupled with how tempting it is not to care, because past experience tells you that going back to add it will make no visual difference on your system. I even do it myself when I write private test apps to test some random bug: I deliberately skip certain barriers because it's a pain in the rear end to write one everywhere it's technically necessary, and I do it because I know that on the HW I'm targeting those barriers aren't necessary... but I bet if I run it on something else, things will go disastrously wrong.

You just gotta really pay attention with these new APIs, and it's the reason I don't look down on anyone wanting to stick with OGL and D3D. Those APIs are old and have warts (OGL especially), but they are categorically easier to program. I don't think I would ever choose to write anything on those old APIs anymore over the new ones, but it's easy for me to say that since I'm already comfortable with the new API.

Also, SPIRV is a really, really well-thought-out intermediate representation. I look forward to what compiler frontend writers and tools programmers do with it.

Tres Burritos
Sep 3, 2009

Minsky posted:

Also, SPIRV is a really, really well-thought-out intermediate representation. I look forward to what compiler frontend writers and tools programmers do with it.

So the deal with SPIRV is that you'll compile your shader language to SPIRV, which is then what the GPU runs, right? This'll allow people to come up with new languages to run on GPUs? So maybe academic types will come up with something that works really well for astronomy computations (or something) and then they can write code in that instead of GLSL or whatever? Is this where the "new generation graphics and compute API" comes in?

I don't really "get" SPIRV.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
It's acknowledging that everyone wants to write their shaders, once, in HLSL, by providing a compile-to target. There aren't going to be that many SPIR-V languages.

Sex Bumbo
Aug 14, 2004
A gpu almost never runs anything you give it directly, because different hardware will have different byte codes. Also the driver doesn't compile anything until you create a pipeline in vk/dx12, or render something in gl/dx11-. That's why there's always a huge stall in older APIs when you first draw things.

Since spirv is documented and there's also a llvm<>spirv thing, you can do stuff like write a spirv->hlsl tool, or c->spirv, etc.

Minsky, have you been using the reference glslang tool or something else?
