Tres Burritos
Sep 3, 2009

So I figured out the math(s) to make a camera in OpenGL but now I'm having problems with my normals. This is what happens:
https://www.youtube.com/watch?v=5u-57Rst0Q8

That's supposed to be 3 triangles of a cube, and that one vertical side is messed up ... somehow.
Am I doing something wrong here?
code:
// Clear Screen And Depth Buffer
gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);   
		
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, VBH.vbo);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL2.GL_COLOR_ARRAY);
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glColorPointer(3, GL.GL_FLOAT, VBH.vertexStride, VBH.colorPointer);
gl.glVertexPointer(3, GL.GL_FLOAT, VBH.vertexStride, VBH.vertexPointer);
gl.glNormalPointer(GL.GL_FLOAT, VBH.vertexStride, VBH.normalPointer);
//gl.glDrawArrays(GL.GL_TRIANGLES, 0, (VBH.vertices.length / 3));
gl.glDrawArrays(GL.GL_TRIANGLES, 0, VBH.Triangles.length * 3);
	    
	    
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_COLOR_ARRAY);
gl.glDisableClientState(GL2.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Did you forget to enable depth testing?
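If so, it's a one-liner at init time - roughly this, in JOGL terms (a sketch against your existing gl object, not tested against your code):
code:
// do this once during setup, before the render loop
gl.glEnable(GL.GL_DEPTH_TEST);    // otherwise triangles just draw in submission order
gl.glDepthFunc(GL.GL_LEQUAL);     // keep fragments at or nearer than what's already stored
// you're already clearing GL_DEPTH_BUFFER_BIT every frame, which is the other half of it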

Tres Burritos
Sep 3, 2009

Suspicious Dish posted:

Did you forget to enable depth testing?

Goddamn it's always 1 line fixes. Thanks.

edit: Also, yaaaaay
https://www.youtube.com/watch?v=aDgyxYlZaLU

Tres Burritos fucked around with this message at 01:59 on Jun 18, 2013

movax
Aug 30, 2008

I don't know if we have a dedicated GPGPU/CUDA thread, are there folks in this thread that have played with CUDA? Looking to figure out the best way to tune threads/blocks/grids (or write a simple scheduler to parcel up the workload based on input size).

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

movax posted:

I don't know if we have a dedicated GPGPU/CUDA thread, are there folks in this thread that have played with CUDA? Looking to figure out the best way to tune threads/blocks/grids (or write a simple scheduler to parcel up the workload based on input size).

Have you looked at the occupancy calculator that comes with the SDK? It's probably the best place to start.

As for variable workloads, the best practice is to put all of your work "packets" into a device memory array, with a size and a "next" counter. Then dispatch a kernel as a "maximal launch" (i.e. a number of blocks guaranteed to fill all the processors in the GPU). In each kernel, run a loop that does a global atomic fetch-and-increment on the "next" counter. If the value is greater than or equal to your workload size, have the kernel exit; otherwise, fetch the workload description for the index you receive and process it in that iteration. The kernel will end once all the blocks have finished their work and there are no more entries in the queue.

This works great for variable sized workloads of discrete independent tasks which can each still take advantage of block-level parallelism and where you know the workload beforehand. There are also ways to leave a workload processor "spinning" and feed work batches into it as a real-time producer/consumer, but they get a bit trickier.

Tres Burritos
Sep 3, 2009

So I've got a cube (and all its vertices). And I want to rotate that cube around its center point. What the hell formulas should I be looking at? To move my camera around I've been using a lot of trig. Generally people do this with matrices, right? Does anyone have a link that has lots of easy examples or something?

haveblue
Aug 15, 2005



Toilet Rascal
What graphics API are you using? Either it has a built-in transformation matrix manager or it should be easy to scare one up online. If you find yourself actually composing rotation formulas by hand you are doing something very wrong (or at least inefficient).

Tres Burritos
Sep 3, 2009

haveblue posted:

If you find yourself actually composing rotation formulas by hand you are doing something very wrong (or at least inefficient).
Right? That's how I feel, but I (re)learned myself a whole bunch of trig.

I'm just using Java OpenGL (JOGL) with associated glu bindings.

I mean, I've seen glRotate, but I'm using a VBO and that method would only work if I have one VBO per cube, correct? You'd pop out the vertices/colors/normals and then just call glRotate()?

edit2: Oh hah, so instead of, say, specifying my cube's center is at (1,1,1) and then manually making the points from there, you just say this cube has points at (-0.5f, 0.5f, 0.5f), (0.5f, 0.5f, 0.5f), etc., and then before you render it, you translate the matrix to (1,1,1) and then you apply your rotations?

That makes everything so much easier.
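i.e. roughly this with the old GL2 matrix stack (just a sketch - angle and drawCube are made-up names for my rotation angle and the VBO draw from earlier):
code:
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glPushMatrix();
gl.glTranslatef(1.0f, 1.0f, 1.0f);        // place the origin-centred cube at (1,1,1)
gl.glRotatef(angle, 0.0f, 1.0f, 0.0f);    // rotate it about its own centre (Y axis here)
drawCube(gl);                             // vertices defined around (0,0,0)
gl.glPopMatrix();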

Feelin' really :pseudo:

Tres Burritos fucked around with this message at 03:41 on Jun 26, 2013

Spatial
Nov 15, 2007

I was reading up on how 6-bit TN monitors simulate 8-bit colour channels with temporal dithering. It occurred to me that, hey, we've got fairly high precision floats in our fragment shaders. Why not apply some dithering there instead of truncating down to 8-bit?

I wrote a simple ordered dithering shader to simulate 10-bit colour channels and it worked pretty well.
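The core of it looks something like this (a from-memory sketch of the idea rather than my exact shader, written as a GLSL snippet in a Java string the way JOGL demos usually carry shader source):
code:
// rough sketch: add a sub-LSB ordered-dither offset right before the 8-bit framebuffer quantises
private static final String DITHER_GLSL =
      "const float bayer[16] = float[16]( 0.0,  8.0,  2.0, 10.0,\n"
    + "                                  12.0,  4.0, 14.0,  6.0,\n"
    + "                                   3.0, 11.0,  1.0,  9.0,\n"
    + "                                  15.0,  7.0, 13.0,  5.0);\n"
    + "vec3 dither(vec3 c) {\n"
    + "    ivec2 p = ivec2(gl_FragCoord.xy) & 3;\n"
    + "    float t = (bayer[p.y * 4 + p.x] + 0.5) / 16.0 - 0.5;   // per-pixel offset in (-0.5, 0.5)\n"
    + "    return c + t / 255.0;   // jitter by less than one 8-bit step\n"
    + "}\n";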

No dither:


Dither:


Closeups of the dithering patterns (half on, half off):




I exaggerated the brightness and contrast in these screenshots to make it easier to see the effect. Normally the difference is primarily noticeable on moving gradients - in particular, the point where the light falls off to zero can't be seen with dithering enabled, while it's very obvious without it.

Spatial
Nov 15, 2007

A couple of OpenGL questions. I'm using an OpenGL 3.3 core context.

On my GTX 660 Ti, MAX_COLOR_TEXTURE_SAMPLES is 32. However, when I make a framebuffer with >16 samples, GetMultisamplefv fails to retrieve any sample position above 16. Presumably there are only actually 16 sampling positions. So what is the GPU actually doing? Supersampling? FXAA? It does render differently: in my 2D game it appears to blur the entire screen a little bit, whereas the levels <= 16 are pixel-accurate like they should be.

Does the sample count map into the more commonly known terms like 2x, 4x etc? I'd like to know so I can expose the option in my game in a way that is easy to understand.

BufferSubData vs MapBufferRange. When does it matter?

Why is the default point-sprite origin upper-left when every other origin is lower-left?

I want to invert the Y axis globally. I know how to do this but should I? I find it way easier to think in terms of Y+ == down because I'm used to DirectX. But maybe there are pitfalls that will frustrate me later that I don't know about yet.

Madox
Oct 25, 2004
Recedite, plebes!
Hey guys, I'm hoping to get some input from you on Unity. Currently I use DirectX 9 directly (C++) and via XNA. I want to start using DX11 which means porting a lot of my code from C# to C++, but maybe Unity is a good way for me to go. I see that I can still write all my own shaders in Unity, as well as create meshes on the fly, but have some concerns:

Unity website says that only the Pro version has realtime shadows, but I already have all my own code for realtime shadows. Surely I can still use that without needing Pro?

They say that render to texture is a Pro feature. Am I reading this right? There is no way to render off screen without the Pro version? No way to render a UI separately and blend it in? This is pretty make or break for me since I use a lot of render to texture for shader effects and procedural textures.

No debugging? Can I still use things like the NVidia special tools to debug without Pro?

I'll be looking into Unity this weekend, but hopefully someone that uses it can give me a head start.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Madox posted:

They say that render to texture is a Pro feature. Am I reading this right? There is no way to render off screen without the Pro version? No way to render a UI separately and blend it in? This is pretty make or break for me since I use a lot of render to texture for shader effects and procedural textures.
There's a lovely workaround way that you certainly can't use in real-time (if I recall correctly it involves rendering to screen, taking a screenshot, and making a texture out of it). But yeah, that's how they hook you into needing Pro. It's a pretty fair line if you think of the non-Pro as kind of a demo version, given that Pro isn't all that outrageously priced. Gives you plenty to work with, but restricts you from using a thing that's really useful but not completely mandatory.

You could just render a UI onto the same screen and blend it using shaders while you're rendering, in a pinch. The thing you really can't do without render to texture is things like in-game screens for security cameras, or Portal/Prey portals.

quote:

No debugging? Can I still use things like the NVidia special tools to debug without Pro?
There's certainly debugging. Do you mean shader debugging?

Madox
Oct 25, 2004
Recedite, plebes!

roomforthetuna posted:

There's a lovely workaround way that you certainly can't use in real-time (if I recall correctly it involves rendering to screen, taking a screenshot, and making a texture out of it). But yeah, that's how they hook you into needing Pro. It's a pretty fair line if you think of the non-Pro as kind of a demo version, given that Pro isn't all that outrageously priced. Gives you plenty to work with, but restricts you from using a thing that's really useful but not completely mandatory.

You could just render a UI onto the same screen and blend it using shaders while you're rendering, in a pinch. The thing you really can't do without render to texture is things like in-game screens for security cameras, or Portal/Prey portals.

There's certainly debugging. Do you mean shader debugging?

My bad, I meant profiling. GPU profiling is listed as Pro.

Ya, it's not a bad price, but it still doesn't make sense for hobbyist-type indie devs like me, who don't ever expect to see $1500 profit from our apps. Especially when DirectX lets me do the things I want, though the cost is time and effort. I'll keep trying it out this weekend though.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Madox posted:

Ya, it's not a bad price, but it still doesn't make sense for hobbyist-type indie devs like me, who don't ever expect to see $1500 profit from our apps. Especially when DirectX lets me do the things I want, though the cost is time and effort. I'll keep trying it out this weekend though.
Right, but it still makes sense to use the free version for hobbyist indie devs who are willing to sacrifice render-to-texture, since that's a feature that's rarely vital to an indie project.

ianfx
May 17, 2003

I am rendering to a texture, and I need to be able to do some special blending. Is this somehow possible? (OpenGL)

code:
  if(src.a > dst.a) {
    dst = src;
  } else {
    // Don't modify the pixel
  }
I can accomplish dst.a = max(src.a, dst.a) by using glBlendEquationSeparate(..., GL_MAX), but I can't figure out how to get the RGB to be overwritten with what I want.

Are there any OpenGL extensions that allow some sort of programmable blending? I know in OpenGL ES 2 there is an extension to get glLastFragColor in the shader, which would allow you to do this.

Schmerm
Sep 1, 2000
College Slice

ianfx posted:

I am rendering to a texture, and I need to be able to do some special blending. Is this somehow possible? (OpenGL)

code:
  if(src.a > dst.a) {
    dst = src;
  } else {
    // Don't modify the pixel
  }
I can accomplish dst.a = max(src.a, dst.a) by using glBlendEquationSeparate(..., GL_MAX), but I can't figure out how to get the RGB to be overwritten with what I want.

Are there any OpenGL extensions that allow some sort of programmable blending? I know in OpenGL ES 2 there is an extension to get glLastFragColor in the shader, which would allow you to do this.

Maybe you could use glAlphaFunc(GL_GREATER) to do alpha testing - discard all incoming fragments whose alpha is less than or equal to the destination fragment. This will prevent the fragment shader from even executing, thus implementing your 'else' condition. Then, a trivial fragment shader (dst=src) for when that alpha test passes.

edit: don't forget to glEnable alpha testing (I think it's GL_ALPHA_TEST or something)

The_Franz
Aug 8, 2003

Schmerm posted:

Maybe you could use glAlphaFunc(GL_GREATER) to do alpha testing - discard all incoming fragments whose alpha is less than or equal to the destination fragment. This will prevent the fragment shader from even executing, thus implementing your 'else' condition. Then, a trivial fragment shader (dst=src) for when that alpha test passes.

edit: don't forget to glEnable alpha testing (I think it's GL_ALPHA_TEST or something)

glAlphaFunc was deprecated in OpenGL 3.0 and removed from the core in 3.1. The correct way to do it now is to test the alpha values directly in a shader.
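For the simple cases, that just means a discard in the fragment shader - something like this (a sketch as a JOGL-style source string; alphaRef stands in for whatever reference value you'd have passed to glAlphaFunc):
code:
// shader-side replacement for glAlphaFunc(GL_GREATER, ref) + GL_ALPHA_TEST
private static final String ALPHA_TEST_FRAG =
      "#version 330 core\n"
    + "in vec4 vColor;\n"                        // whatever your vertex shader passes through
    + "out vec4 fragColor;\n"
    + "uniform float alphaRef;\n"                // the old reference value
    + "void main() {\n"
    + "    if (vColor.a <= alphaRef) discard;\n" // GL_GREATER: keep only fragments with a > ref
    + "    fragColor = vColor;\n"
    + "}\n";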

Schmerm
Sep 1, 2000
College Slice

The_Franz posted:

glAlphaFunc was deprecated in OpenGL 3.0 and removed from the core in 3.1. The correct way to do it now is to test the alpha values directly in a shader.

How do you access the destination fragment's r/g/b/a values from the fragment shader? You'd need that to duplicate the functionality lost by deprecating fixed-function blending and alpha testing.

edit: nevermind. Alpha testing is only between the incoming fragment and constant values, which are definitely doable in shaders, and totally useless for the original quoted problem.

Schmerm fucked around with this message at 20:08 on Jul 10, 2013

Xerophyte
Mar 17, 2008

This space intentionally left blank

ianfx posted:

I am rendering to a texture, and I need to be able to do some special blending. Is this somehow possible? (OpenGL)
I can accomplish dst.a = max(src.a, dst.a) by using glBlendEquationSeparate(..., GL_MAX), but I can't figure out how to get the RGB to be overwritten with what I want.

Are there any OpenGL extensions that allow some sort of programmable blending? I know in OpenGL ES 2 there is an extension to get glLastFragColor in the shader, which would allow you to do this.

In short, no.

In long, programmable blending has been mentioned as something that might be added since 3.0 or so, I think, but since the hardware still only has limited fixed-function blending, programmable blending performance would currently be terrible, and there aren't any extensions for it. I guess there will be vendor-specific extensions when there is vendor-specific hardware, which apparently is already the case for stuff that supports GL_APPLE_shader_framebuffer_fetch or GL_NV_shader_framebuffer_fetch in ES. I assume there's something about the pipelines on the embedded GPUs that made this easier to implement there, and if anyone knows what that might be I'm definitely curious.

If you need programmable blending then you need to ping-pong between buffers. Have two textures and alternate which is source and which is destination at each draw (performance: possibly bad) or suitable batch of draws (performance: possibly okay). It might also be done as a depth test based on the description, depending on what you're trying to accomplish here.
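Very roughly, the ping-pong loop looks like this (a sketch only - fboA/fboB, texA/texB and drawBatch are placeholder names, and I'm assuming each FBO has its own colour texture attached):
code:
int readTex = texA, writeFbo = fboB;
for (int i = 0; i < batchCount; i++) {
    gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, writeFbo);   // render into the "destination" texture
    gl.glBindTexture(GL.GL_TEXTURE_2D, readTex);         // previous result comes in as a sampler
    drawBatch(gl, i);                                     // the fragment shader does the custom "blend"
    // swap roles for the next batch
    if (readTex == texA) { readTex = texB; writeFbo = fboA; }
    else                 { readTex = texA; writeFbo = fboB; }
}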

[E:] Ah. I'm also assuming you're drawing full screen quads, or at least the same region every time. If that's not true then I guess you need to either copy the texture data rather than swap buffers or draw everything twice, neither of which seems very palatable. If you just need this locally you could also try to have the same texture bound as both a uniform and in the active framebuffer object: it's undefined behavior but allowed and might do what you want on your end.

Schmerm posted:

Maybe you could use glAlphaFunc(GL_GREATER) to do alpha testing - discard all incoming fragments whose alpha is less than or equal to the destination fragment. This will prevent the fragment shader from even executing, thus implementing your 'else' condition. Then, a trivial fragment shader (dst=src) for when that alpha test passes.

Apart from the post you wrote above, note that this definitely won't prevent the fragment shader from executing. The fragment shader is where the alpha value is calculated; if it's not run, then there's not much alpha to test with.

Xerophyte fucked around with this message at 08:36 on Jul 11, 2013

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I'm looking to diagnose a stuttering issue I'm having in OpenTK (C#/OpenGL bindings library). What I want is a non-OpenTK (read: pretty much anything!) program where a large triangle moves back and forth across the screen. I don't have any C++ dev environment set up or I'd just make it myself, does anyone have a link to anything of the sort, like a suite of simple OpenGL examples compiled?

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"

Orzo posted:

I'm looking to diagnose a stuttering issue I'm having in OpenTK (C#/OpenGL bindings library). What I want is a non-OpenTK (read: pretty much anything!) program where a large triangle moves back and forth across the screen. I don't have any C++ dev environment set up or I'd just make it myself, does anyone have a link to anything of the sort, like a suite of simple OpenGL examples compiled?

I'd suggest using the visual studio CPU profiler. This should at least get you within the ballpark for your high latency frames. As for demo apps that aren't openTK, try anything from the 2.5 branch of sharpdx in the toolkit examples. They also have examples on how to debug if your GPU is causing the spikes (I forget exactly where).

A free alternative if your version of visual studio doesn't have a profiler is sharpdevelop; it contains a reasonably good profiler.

Another option, if you are using a version of visual studio before 2012 and have an nvidia card: Nvidia Nsight is pretty great.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
What would I even be looking for in the CPU profiler?

Also I forgot to note that trying SharpDX probably won't help much--this problem is non-existent with DirectX. I was using SlimDX for a long time and it has no issues at all (I still have a copy of the engine in SlimDX and it runs smooth, no jitter or stutter or anything).

Thanks though, I will check out Nvidia nsight.

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"
If your application is single threaded and you are using blocking API calls, you can find which calls are taking all the time.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
You know, I think I had a longer post written out originally which had more detail, and then I rewrote it and forgot to mention a few things, sorry. Maybe this will clarify:

The framerate is consistent. I've written to a log file that outputs every milliseconds-per-frame, and the stuttering is not reflected by a change in the output times. For example, with vsync on, every single frame is 16.666 ms, which is exactly what you'd expect. Yet the stuttering is still extremely visible, meaning that the single thread is not making any blocking calls. I get a similar behavior without vsync--very low, consistent time-per-frame numbers that don't change even when it's stuttering.

Dred_furst
Nov 19, 2007

"Hey look, I'm flying a giant dong"

Orzo posted:

You know, I think I had a longer post written out originally which had more detail, and then I rewrote it and forgot to mention a few things, sorry. Maybe this will clarify:

The framerate is consistent. I've written to a log file that outputs every milliseconds-per-frame, and the stuttering is not reflected by a change in the output times. For example, with vsync on, every single frame is 16.666 ms, which is exactly what you'd expect. Yet the stuttering is still extremely visible, meaning that the single thread is not making any blocking calls. I get a similar behavior without vsync--very low, consistent time-per-frame numbers that don't change even when it's stuttering.

I uh, hope your logger is off thread then, or included in frame times because hitting a disk like that could cause micro stuttering. It could be a threading issue between objects but I am not so sure. Sorry I haven't been much help with this one. Are you running an AMD card and have you updated the drivers? because there was some dumb micro stuttering bug with AMD cards and older drivers. It got fixed recently as far as I know.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!

Dred_furst posted:

I uh, hope your logger is off thread then, or included in frame times because hitting a disk like that could cause micro stuttering. It could be a threading issue between objects but I am not so sure. Sorry I haven't been much help with this one. Are you running an AMD card and have you updated the drivers? because there was some dumb micro stuttering bug with AMD cards and older drivers. It got fixed recently as far as I know.
I'm not logging to disk until I close the application. I'm just appending times to a list of strings and then outputting them at the end of the application. Trust me, it has nothing to do with the logging--it happens without it.

It also (probably) has nothing to do with my objects, I've duplicated the issue with a single-triangle test application that generates no new objects, garbage, or anything.

I think it actually might be related to nvidia cards only, my sample size is really small but two people with AMD cards that have tested it reported no issues. The only machines I currently have personal access to are nvidia machines though, and all 3 have stuttered. Anyway thanks for your help, no need to apologize for not knowing the answer. I think I might try to elevate this one to the OpenTK team or something.

Orzo fucked around with this message at 23:29 on Jul 12, 2013

Tres Burritos
Sep 3, 2009

Orzo posted:

I'm not logging to disk until I close the application. I'm just appending times to a list of strings and then outputting them at the end of the application. Trust me, it has nothing to do with the logging--it happens without it.

It also (probably) has nothing to do with my objects, I've duplicated the issue with a single-triangle test application that generates no new objects, garbage, or anything.

I think it actually might be related to nvidia cards only, my sample size is really small but two people with AMD cards that have tested it reported no issues. The only machines I currently have personal access to are nvidia machines though, and all 3 have stuttered. Anyway thanks for your help, no need to apologize for not knowing the answer. I think I might try to elevate this one to the OpenTK team or something.

I was pretty sure OpenTK was no longer active though?

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
It's not in active development, but it's stable. Plus, the Monogame folks depend on it, so any critical issues are generally also issues to the Monogame devs. So, you could say it's partially active via them.

Boz0r
Sep 7, 2006
The Rocketship in action.
Can someone refer me to some introductory articles on fluids in OpenGL? I found a cool paper on Position Based Fluids but it's way too advanced.

unixbeard
Dec 29, 2004

Jos Stam wrote a bunch of papers on fluids starting in the late nineties; those are probably the best place to go if you're looking to start from square one. Otherwise, smoothed particle hydrodynamics looks kinda similar, and you can find reference implementations in JavaScript/WebGL floating around as well.

unixbeard
Dec 29, 2004

Does anyone know a good book on intermediate/advanced opengl programming? I'd like something that covers commonly used techniques that are beyond introductory basics, so stuff like SSAO, volumetric rendering, etc. I know I can find lots of info on the web about this stuff but I'd like it if there was a book I could just work through that has consistent writing and code style. I've been reading through realtime rendering which is great, so something around that level but perhaps less comprehensive and with more code.

Zerf
Dec 17, 2004

I miss you, sandman

unixbeard posted:

Does anyone know a good book on intermediate/advanced opengl programming? I'd like something that covers commonly used techniques that are beyond introductory basics, so stuff like SSAO, volumetric rendering, etc. I know I can find lots of info on the web about this stuff but I'd like it if there was a book I could just work through that has consistent writing and code style. I've been reading through realtime rendering which is great, so something around that level but perhaps less comprehensive and with more code.

Try any of the GPU Gems, ShaderX or GPU Pro books. They're not OpenGL specific but they have some really neat articles. The GPU Gems books are also available for free from Nvidia (https://developer.nvidia.com/content/gpu-gems-3, for example), so you can see if there's something you are interested in. I'd also rate the GPU Gems series higher than the others in terms of quality, but it's been a while since the latest release.

I don't know if there's a single book that covers exactly what you're asking for; usually you learn more advanced stuff like this from the web, articles, and/or talks at SIGGRAPH/GDC.

OzyMandrill
Aug 12, 2013

Look upon my words
and despair

unixbeard posted:

Does anyone know a good book on intermediate/advanced opengl programming? I'd like something that covers commonly used techniques that are beyond introductory basics, so stuff like SSAO, volumetric rendering, etc. I know I can find lots of info on the web about this stuff but I'd like it if there was a book I could just work through that has consistent writing and code style. I've been reading through realtime rendering which is great, so something around that level but perhaps less comprehensive and with more code.

Tbh, beyond the basic level, DX10 & GL are much of a muchness.
Both use shaders with Cg, so the advanced stuff tends to be less code & more generic waffle: maybe some Cg code if you are lucky, math formulas if not. They usually expect you to know how to set up the constants/textures/rendertargets/streams/etc. yourself.

Quinton
Apr 25, 2004

I'm poking around with GL 3.3+ and doing some super-simple engine hacking just to understand how everything's put together. I'm wondering if there are any sources of free/opensource/CC/etc 3d character/npc models that include animations. Something suitable for putting my code through its paces and using as placeholder art.

Blendswap has a bunch of decent looking rigged models, but none of them seem to include animations (eg, idle, a walk loop, an action or two).

e: oops... meant to post to the gamedev megathread but missed...

e2: and only a few minutes after posting this, I stumbled over opengameart.org which has a bunch of stuff, including things like this guy, who has a handful of animations, and I've been able to export from blender in a useful state:

http://opengameart.org/content/animated-archer

Quinton fucked around with this message at 17:43 on Aug 20, 2013

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
I have a bunch of triangles (3 vertices each) and their face normals (one per triangle). Now when I go to render, I set up a VAO with position bound to the first attribute and stick my VBO full of vertices in. I then set up an index buffer that I calculate from the list of triangles and vertices.

Now I want to use those normals, but they are per-face normals.

1) Can I have OpenGL do the heavy lifting to calculate the per vertex normals or do I have to manually do it? I already maintain a vertex -> triangle map when I calculate the indices, so this is tedious but doable

2) Is there a way of communicating my face normal to the appropriate fragments in the fragment shader? From what I understand, OpenGL interpolates the normals from the vertices of the primitive when it gives that data to the fragment shader

I'm totally new to 3d graphics so I'm probably doing this all wrong.

haveblue
Aug 15, 2005



Toilet Rascal

Malcolm XML posted:

1) Can I have OpenGL do the heavy lifting to calculate the per vertex normals or do I have to manually do it? I already maintain a vertex -> triangle map when I calculate the indices, so this is tedious but doable

That isn't something it can help you with, unfortunately.

quote:

2) Is there a way of communicating my face normal to the appropriate fragments in the fragment shader? From what I understand, OpenGL interpolates the normals from the vertices of the primitive when it gives that data to the fragment shader

Yeah, per-vertex data is the way to go in the vast majority of normal cases. You'll have to set up the vertex and fragment variables to include the normal data and then OpenGL will do what you expect.

OzyMandrill
Aug 12, 2013

Look upon my words
and despair

Malcolm XML posted:

1) Can I have OpenGL do the heavy lifting to calculate the per vertex normals or do I have to manually do it? I already maintain a vertex -> triangle map when I calculate the indices, so this is tedious but doable
It is possible, but you really don't want to do it!
The vertex shader only ever has a SINGLE vertex of information to go on, & a small number of constants that get used for every vertex of that mesh. So you need to supply, per vertex, enough information to do the calc - i.e. the face normals of all neighbouring faces and a 'weight' that describes how much they influence this vertex normal (usually the sine of the angle the triangle's point makes at this vert). The shader will then be doing quite a lot of work munging & renormalising all this for every vertex tho.

A general principle for getting good performance - if the value doesn't change every frame, then pre-calculate it & just send it as one of the vertex streams.

If you are making meshes programmatically, urk! Best to generate a simple (unnormalised) vertex normal cheaply, and let the vertex shader normalise it (it will be faster than you, as normalising is usually one of the more optimised bits of the vertex shader hardware)

Malcolm XML posted:

2) Is there a way of communicating my face normal to the appropriate fragments in the fragment shader? From what I understand, OpenGL interpolates the normals from the vertices of the primitive when it gives that data to the fragment shader
Yes - you duplicate every vertex so:
code:
 1---2
 | / |
 3---4
Tri 1: 1-2-3
Tri 2: 3-2-4
becomes in a vertex buffer:
code:
 vert 1: position(1), normal(tri_1),
 vert 2: position(2), normal(tri_1),
 vert 3: position(3), normal(tri_1),

 vert 4: position(3), normal(tri_2),
 vert 5: position(2), normal(tri_2),
 vert 6: position(4), normal(tri_2),
Across the face, the fragment will have the face normal (it will be interpolating it, but always getting the same result)
You can't really just bung face normals in as per-vertex normals without careful thought, but that depends how much you care about some deviations in lighting, as it might actually not look too bad, especially if it is fast action.

Malcolm XML posted:

I'm totally new to 3d graphics so I'm probably doing this all wrong.
The thing to remember here is that memory is cheap, calculations on the GPU are not!
Do the work before-hand (preferably pre-process when generating your game data so it is not done every time when it loads the models) and even if it means having more data per vertex (like these duplicated normals), it saves time in the GPU & that is the important bit.

If you want to play around with this stuff, then try something like nVidia FX Composer, which is very old & shonky - but actually pretty simple to use. There are plenty of presets to make a rough scene with a shader, and then you can edit the shader, hit a button, and see the results. For an absolute beginner to shaders it makes a good sandbox for trying stuff out.

Edit: I'm sure there are other similar shader testbed apps around too, maybe even one that has been updated in the last 6 years - can anyone recommend one?

OzyMandrill fucked around with this message at 09:55 on Aug 29, 2013

Jewel
May 2, 2009

Also these days generally you can either quickly generate normals for a dynamic mesh you're generating at runtime or store the normals in a model you're loading in. No reason not to use a good model format these days. Storage space isn't really an issue, and for some console games where you have a huge disk to work with, it doesn't matter how big your file sizes are really as long as it saves processing time, so you can pack them full with normals and tangent data and whatever without much problem. You can generate normals at load time if an object doesn't have any, but really, just save them in a format that does.

baldurk
Jun 21, 2005

If you won't try to find coherence in the world, have the courtesy of becoming apathetic.
I posted this in the general game jobs thread in games since I'm subscribed there, and I completely forgot there was a thread in here for graphics. This will in fact be the last time I post this since I don't want to spam around everywhere but I figured it's another shot at getting any bites.

I'm writing a tool for graphics programmers on Windows + DX11 and I'm looking to get some external feedback from people. You don't necessarily have to be currently working on a DX11 project - although that would be ideal - I'm just after people familiar with graphics who can run a DX11 app and give me some field testing of the tool.

If that interests you and you've got some time to spare then please shoot me a mail: baldur AT crytek.com.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Yuck, so either I have to abandon my index buffers and just load my STL file straight into the VBO, or spend the time to make per-vertex normals and keep the bandwidth savings of the index buffer. Well, STL is terrible, so I guess I'll take the time to calculate the vertex normals.
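For reference, the CPU-side averaging pass is pretty short - a rough sketch, recomputing face normals from positions rather than trusting the ones in the STL (positions as a flat float[3*numVerts] array, indices as the int index buffer; names made up):
code:
float[] normals = new float[positions.length];
for (int t = 0; t < indices.length; t += 3) {
    int a = indices[t], b = indices[t + 1], c = indices[t + 2];
    // face normal = (B - A) x (C - A), left unnormalised so larger triangles count for more
    float ux = positions[3*b] - positions[3*a], uy = positions[3*b+1] - positions[3*a+1], uz = positions[3*b+2] - positions[3*a+2];
    float vx = positions[3*c] - positions[3*a], vy = positions[3*c+1] - positions[3*a+1], vz = positions[3*c+2] - positions[3*a+2];
    float nx = uy*vz - uz*vy, ny = uz*vx - ux*vz, nz = ux*vy - uy*vx;
    normals[3*a] += nx; normals[3*a+1] += ny; normals[3*a+2] += nz;
    normals[3*b] += nx; normals[3*b+1] += ny; normals[3*b+2] += nz;
    normals[3*c] += nx; normals[3*c+1] += ny; normals[3*c+2] += nz;
}
// normalise the accumulated sums to get the per-vertex normals
for (int v = 0; v < normals.length; v += 3) {
    float len = (float) Math.sqrt(normals[v]*normals[v] + normals[v+1]*normals[v+1] + normals[v+2]*normals[v+2]);
    if (len > 0f) { normals[v] /= len; normals[v+1] /= len; normals[v+2] /= len; }
}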
