Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Have you tried doing the thing I edited in about sorting first then inverting?

Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

Have you tried doing the thing I edited in about sorting first then inverting?

Seems to be inverted:



Like so?
code:
float exp = 5.0;

float interp1 = pow(abs(a[0] - 1),exp);
float interp2 = pow(abs(a[1] - 1),exp);
float interp3 = pow(abs(a[2] - 1),exp);

tempColor = (interp1*lookup[b[0]] + 
	interp2*lookup[b[1]] +
	interp3*lookup[b[2]]) / (interp1 + interp2 + interp3);

float interp0 = pow(a[0], 1);

return lerp(tex2D(G,N).rgb, tempColor, interp0);

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
That looks a lot more reasonable, you can probably increase the exponents again now.

Did you sort so the smallest value is the most significant?

Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

That looks a lot more reasonable, you can probably increase the exponents again now.

Did you sort so the smallest value is the most significant?

Yeah, I forgot to also do the abs(dist - 1) on the most significant value there, and voila:



Which is good, because Shader Forge doesn't seem to expose the MVP sub-matrices as nodes in any clear fashion (which would mean I'd have to try writing it manually).

Thank you, Joda, this seems to work perfectly! :D

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Great! Sorry that took like a week and more than a page, but at least we got there :D. Next time I'll probably test out my solution before posting.

Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

Great! Sorry that took like a week and more than a page, but at least we got there :D. Next time I'll probably test out my solution before posting.

It's not a problem, I work full time so I only have a few hours here and there.

But yes! Victory!

My current TODO is to find or create more hexagon-compatible shaders; I notice there are seams where the tiles don't match up well when the texture is fairly large, like the stone texture there, where the seams are really apparent.

And, now that I have interpolation working, to gradually work towards Xerophyte's solution of passing in an array of all the textures. Then I can be sure of decent performance even as the map gets arbitrarily large.

Also I should see if I can get the geometry part working too, that might also be important for performance. :D

The final result we can post somewhere like reddit so this knowledge is never lost to the Tech Priests. :)

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I was actually planning to implement that int texture containing indices for a texture array that I originally posted about (which is why my geom shader has those commented-out grid_x/grid_y values), since I got curious how it'll work, and it'll make a nice addition to my portfolio for when I apply for my master's. You're welcome to ask any questions that come up regarding shaders, conceptual or low-level stuff (either here or by PM), or to compare notes.

Raenir Salazar
Nov 5, 2010

College Slice
So my next question is about creating a texture to use as an array of values. I assume I can create a texture indexed from 0 to X and 0 to Y that holds integer values, and that I can read those values at random.

I assume that to read a value I would use tex2D(myTex, myCoords.xy).r or something, to get a single int value that corresponds to my texture ID, right?

So the only step left is to figure out how I can create a 2D Texture of arbitrary dimensions?

Edit: http://docs.unity3d.com/ScriptReference/Texture2D.SetPixel.html Probably this way: just set values at integer X,Y and hopefully I can ignore everything else?

e: On a side note, fixing the UVs in Blender was all I needed to get this to work on a 3D model and not just a flat thing.

Raenir Salazar fucked around with this message at 00:15 on Aug 28, 2015

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I'm implementing the index thing atm, but I have a quick addendum to the interpolation stuff. The values we ended up with gave some very sharp edges around the corners once I got a grid working (and I have no idea why), so I changed it to a four-way weighted average with the main colour always having weight 1.

code:
float expon = 5.0f;

float interp1 = pow(sig_sides[0].value,expon);
vec4 col1 = sideColors[sig_sides[0].index];

float interp2 = pow(sig_sides[1].value,expon);
vec4 col2 = sideColors[sig_sides[1].index];

float interp3 = pow(sig_sides[2].value,expon);
vec4 col3 = sideColors[sig_sides[2].index];

fragColor = (mainColor + interp1*col1 + interp2*col2 + interp3*col3)/
            (1 + interp1 + interp2 + interp3);
(The grid is currently just random colours.)

Before:

After:


If you want sharper edges just increase expon like before.

Joda fucked around with this message at 06:22 on Aug 28, 2015

Raenir Salazar
Nov 5, 2010

College Slice
Cool! I'll apply it once I'm home; that's perfect.

Right now I'm struggling with whether to completely redo how I generate the map.

Right now I generate a grid that's shaped like a giant hex; I had two ways of doing this, either in a spiral or in rows, and both are kinda complicated because I'm also making the tiles a graph. The in-game objects have a reference to all surrounding tiles to make lookups easier.

The website linked earlier that I've been using actually gives some functions to convert between cube coordinates (which I currently use) and axial/offset coordinates, but I think I still end up with a problem in that I have negative coordinates, so a straight-up 2D array to loop through and create a texture from doesn't really work per se. In general it seems like the solutions are really complex (such as remapping my grid to run from 0 to 2*n), and it'd be easier to simply use an offset coordinate system.
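
e: For reference, the remap itself wouldn't be complicated if I went that route: if the axial coordinates run from -N to N, adding N to each gives array indices from 0 to 2N. A rough, untested sketch (the names here are placeholders, not my actual classes):
C++ code:
// Untested sketch: remap axial coords in [-N, N] to array indices in [0, 2N]
// so the whole hex-shaped map fits in a plain 2D array / texture.
const int N = 5;                      // map "radius" in hexes (placeholder)
const int dim = 2 * N + 1;
int tileTexIds[dim][dim];             // e.g. one texture id per tile

void storeTile(int q, int r, int texId) {  // q, r are axial and may be negative
    tileTexIds[q + N][r + N] = texId;
}

int lookupTile(int q, int r) {
    return tileTexIds[q + N][r + N];
}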

On the other hand I liked the giant hexagon as giving more natural looking terrain.

Grrrr.

Edit: How do I actually convert from a pixel/worldspace coordinate to a cube coordinate? How do I know which hex I'm in? Does my grid need to be centered for this to work?

Edit 2: Xerophyte's solution seems to assume this is the case; I might need to fiddle with my grid if so.

Edit 3: Hrmpth, I'm pretty sure vertex positions are mostly a no-go; how do I differentiate between two hexes when the points overlap? I guess I need to use a geometry shader and find the center vertex?

Raenir Salazar fucked around with this message at 22:26 on Aug 28, 2015

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I got my grid working with an array texture. I still haven't set up any UVs, so I just upload three uniformly coloured layers and sample a single point. I moved the significance sorting into the geometry shader since sorting in a fragment shader is a big no-no, but you should probably stick with the old solution to avoid being overwhelmed. Also, like I said I only know how to do this with low-level OpenGL. I can't answer anything Unity specific.

Raenir Salazar posted:

Edit: How do I actually convert from a pixel/worldspace coordinate to a cube coordinate? How do I know which hex I'm in? Does my grid need to be centered for this to work?

Edit 2: Xerophyte's solution seems to assume this is the case; I might need to fiddle with my grid if so.

Edit 3: Hrmpth, I'm pretty sure vertex positions are mostly a no-go; how do I differentiate between two hexes when the points overlap? I guess I need to use a geometry shader and find the center vertex?

My solution from the base of it is to upload points in a hexagonal grid like so:

C++ code:
   for(int i = 0; i < dimensions; ++i) {
        for(int j = 0; j < dimensions; ++j) {
            double offsetY = (i % 2 == 1) ? 0.86602540378 : 0.0;
            points[i + dimensions*j] = glm::vec2(-i*1.5f,j*1.73205080757 + offsetY);
        }
    }
I'll have to revisit this at some point, since the indexing is kinda wonky, but that's the basic idea.

which I upload as a vertex buffer for a vertex array object. The hexagons don't exist in memory but are instead generated each frame by the geometry shader. I use the current vertex index to determine grid position in the geometry shader. For instance, if I have a 10*10 grid, my indices go from 0 to 99. To get the grid x and y I need to divide by 10 to get one and modulo 10 to get the other. (That is assuming you uploaded the vertices as a flat array representing multiple dimensions.)

This is the part in the geometry shader where I populate the neighbour texture index array.
code:
int grid_x = idx[0] % grid_dim;
int grid_y = idx[0] / grid_dim;

mainColor = texture(texture_uniform,ivec2(grid_x,grid_y)).r;

sideColors[0] = texture(texture_uniform,ivec2(grid_x + 1,
                                        (idx[0] % 2 == 1) ? grid_y + 1 : grid_y)
                                        ).r;

sideColors[1] = texture(texture_uniform,ivec2(grid_x,grid_y + 1)).r;

sideColors[2] = texture(texture_uniform,ivec2(grid_x - 1,
                                        (idx[0] % 2 == 1) ? grid_y + 1 : grid_y)
                                        ).r;

sideColors[3] = texture(texture_uniform,ivec2(grid_x - 1,
                                             (idx[0] % 2 == 1) ? grid_y : grid_y - 1)
                                             ).r;

sideColors[4] = texture(texture_uniform,ivec2(grid_x,grid_y - 1)).r;
sideColors[5] = texture(texture_uniform,ivec2(grid_x + 1,
                                         (idx[0] % 2 == 1) ? grid_y : grid_y - 1)
                                         ).r;

sideColors contains integer indices for each of the 6 sides. texture_uniform here is an isampler2DRect, i.e. a sampler for an integer rectangle texture, which currently contains random values between 0 and 2. idx is determined in the vertex shader by assigning the value gl_VertexID to it.

And this is the final fragment shader (all it needs is UVs):
code:
#version 330

in vec3 hex_coord;  //Contains interpolation values for (AF/CD,AB/DE,BC/EF)
flat in int mainColor;      //Contains array index of primary texture
flat in int sideColors[6];  //Contains array indices of 6 bordering textures
flat in int sig_sides[3];   //Expresses which indices are currently the 3 most significant.

uniform sampler2DArray tex_array;

out vec4 fragColor;

void main() {
    float expon = 10.0f;

    vec2 UV = vec2(1.0,1.0);

    vec4 col0 = texture(tex_array,vec3(UV,float(mainColor)));

    float interp1 = pow(hex_coord.x,expon);
    vec4 col1 = texture(tex_array,vec3(UV,float(sideColors[sig_sides[0]])));

    float interp2 = pow(hex_coord.y,expon);
    vec4 col2 = texture(tex_array,vec3(UV,float(sideColors[sig_sides[1]])));

    float interp3 = pow(hex_coord.z,expon);
    vec4 col3 = texture(tex_array,vec3(UV,float(sideColors[sig_sides[2]])));

    fragColor = (col0 + interp1*col1 + interp2*col2 + interp3*col3)/
                (1 + interp1 + interp2 + interp3);
}
The final geometry shader is here.

Produces this:


I'll have to refactor stuff, because some of the code is unreadable and I do some dumb poo poo, and I still need to generate UVs, but that should explain the basic idea behind it.. I hope.

E: Also, using the geometry shader like this is a bad idea, since it's not very efficient. Ideally it should be phased out and replaced by vertex attributes. I'm getting draw times of 0.8ms on the 10x10 grid, which is really awful.

Joda fucked around with this message at 00:49 on Aug 29, 2015

lord funk
Feb 16, 2004

This isn't strictly 3D graphics related, but how do you connect your renderer to your models once you have a larger project (many variable objects, maybe different scenes, etc.)? Lots of sample code I'm learning from just tosses a model into the renderer file that manages the graphics, but that seems totally wrong.

Is there a good structural paradigm that everyone uses?

Raenir Salazar
Nov 5, 2010

College Slice
Super hardcore procrastinating, but I have an idea I'll try out. I think if gl_Position works the way I think it does, I can just check for a vertex at local model position 0,0,0 and then check its world position.

Then convert it into axial coordinates and I think we're good.

lord funk posted:

This isn't strictly 3D graphics related, but how do you connect your renderer to your models once you have a larger project (many variable objects, maybe different scenes, etc.)? Lots of sample code I'm learning from just tosses a model into the renderer file that manages the graphics, but that seems totally wrong.

Is there a good structural paradigm that everyone uses?

Which language? Once you start basically crafting your own mini graphics engine, at that point you start looking at design patterns and programming paradigms your language best supports (C#/C++ -> OOP) and then construct a general system you can invoke to handle graphics.

This guy has something similar to what I'm referring to; check out (I think) the Assimp tutorial to see his usage of a "Mesh" class in his overall structure.
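
Roughly the kind of split I mean, as a bare-bones, untested sketch (not any particular engine's API, just the general shape): the mesh owns its GPU handles, and the renderer just walks a list of meshes each frame.
C++ code:
#include <vector>

// Untested sketch of a Mesh/Renderer split; the names are made up for illustration.
struct Mesh {
    unsigned int vao = 0;          // GPU handles owned by the mesh
    unsigned int indexCount = 0;
    unsigned int shader = 0;       // or a full Material struct
};

class Renderer {
public:
    void submit(const Mesh* m) { queue.push_back(m); }

    void drawAll() {
        for (const Mesh* m : queue) {
            // bind m->shader, set per-object uniforms, then e.g.:
            // glBindVertexArray(m->vao);
            // glDrawElements(GL_TRIANGLES, m->indexCount, GL_UNSIGNED_INT, nullptr);
        }
        queue.clear();
    }

private:
    std::vector<const Mesh*> queue;
};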

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe

Raenir Salazar posted:

Super hardcore procrastinating, but I have an idea I'll try out. I think if gl_Position works the way I think it does, I can just check for a vertex at local model position 0,0,0 and then check its world position.

gl_Position has no initial value when shading begins. You assign the window-space (or whatever, I can't remember the proper terms for the different spaces along the pipeline) transformed vertex to it (e.g. gl_Position = PVM * vertexIn.) I may be wrong, but I don't think you can use it to deduce anything about model space without some seriously costly per-fragment matrix multiplications and that's probably not worth it.

Joda fucked around with this message at 03:48 on Aug 30, 2015

Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

gl_Position has no initial value when shading begins. You assign the window-space (or whatever, I can't remember the proper terms for the different spaces along the pipeline) transformed vertex to it (e.g. gl_Position = PVM * vertexIn.) I may be wrong, but I don't think you can use it to deduce anything about model space without some seriously costly per-fragment matrix multiplications and that's probably not worth it.

Might have been thinking of glVertex then; however, I am in luck: Shader Forge has a node ("Object Pos") that gives me the object's pivot point in world space, which is what I needed and close enough to the center of the mesh to give me a good idea of where the hex is located.

How Shader Forge implements this I have no idea, but it gives me what I need.

gonadic io
Feb 16, 2011

>>=
Does anybody here have experience with JMonkeyEngine 3? I'm running into some peculiar behaviour where I can't actually remove a child from a node under certain conditions (which I've yet to figure out exactly). Here's the result of my print statements before and after running the removal command for a few different children:

code:
parent's (svoSpatial (Node)) child named 1 before detaching is 1 (Geometry)
that child's parent before detaching is svoSpatial (Node)
parent's (svoSpatial (Node)) child named 1 after detaching is null
that child's parent after detaching is null

parent's (1 (Node)) child named 7 before detaching is 7 (Geometry)
that child's parent before detaching is 1 (Node)
parent's (1 (Node)) child named 7 after detaching is null
that child's parent after detaching is null


parent's (svoSpatial (Node)) child named 5 before detaching is 5 (Geometry)
that child's parent before detaching is svoSpatial (Node)
parent's (svoSpatial (Node)) child named 5 after detaching is 5 (Geometry) <-- !!!
that child's parent after detaching is null


parent's (1 (Node)) child named 7 before detaching is 7 (Node)
that child's parent before detaching is 1 (Node)
parent's (1 (Node)) child named 7 after detaching is null
that child's parent after detaching is null


I've tried using detachChild, removeFromParent and detachChildNamed which all give the same results - in that third case, the child named "5" is still reachable from its parent after it's supposed to have been detached.

The actual code is on github if somebody wants to look. It's Scala but I would be very surprised if the language was causing the issue here as it works fine in most of the cases.

e: I wonder if all the nodes in the scene graph need to have unique names, and there's another node called "5" somewhere else in the program which is causing weird results. That could be it?

e2: According to slide 52 in http://wiki.jmonkeyengine.org/tutorials/scenegraph/assets/fallback/index.html getChild is recursive and can return results that are not in getChildren (which just returns the immediate children of that node). I bet that's it. Thanks thread!

gonadic io fucked around with this message at 07:46 on Aug 30, 2015

Raenir Salazar
Nov 5, 2010

College Slice
Alrighty, so with the World Position node in Shader Forge I took the formula that is meant to convert pixels to hex coordinates and used it to convert the worldspace coordinate of the hex to its axial coordinate and then to its cube coordinate.

This seems to work spot on except for the signs being flipped for some strange reason.

Makin' progress. I just need the one last small step of recontextualizing my shader to choose textures based on the map texture lookup instead of the static references to what the delta textures used to be.
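
e: For my own reference, the conversion boils down to something like this on the CPU side (an untested sketch of the redblobgames pointy-top formula plus cube rounding; the sign flip I mentioned presumably lives somewhere in here):
C++ code:
#include <cmath>

// Untested sketch: worldspace (x, z) -> axial -> rounded cube coords,
// following http://www.redblobgames.com/grids/hexagons/#pixel-to-hex
void worldToCube(float x, float z, float size, int &cx, int &cy, int &cz) {
    float q = (x * std::sqrt(3.0f) / 3.0f - z / 3.0f) / size;  // axial q
    float r = (z * 2.0f / 3.0f) / size;                        // axial r

    // cube coords: x = q, z = r, y = -x - z
    float fx = q, fz = r, fy = -fx - fz;

    // cube rounding: round each axis, then fix the one with the largest error
    float rx = std::round(fx), ry = std::round(fy), rz = std::round(fz);
    float dx = std::fabs(rx - fx), dy = std::fabs(ry - fy), dz = std::fabs(rz - fz);
    if (dx > dy && dx > dz)      rx = -ry - rz;
    else if (dy > dz)            ry = -rx - rz;
    else                         rz = -rx - ry;

    cx = (int)rx; cy = (int)ry; cz = (int)rz;
}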

Raenir Salazar fucked around with this message at 04:03 on Aug 31, 2015

Raenir Salazar
Nov 5, 2010

College Slice
Okay, hit a wall since I can't effectively debug my output beyond some vague sense of color.

So what I do in my program is build and pass to my shader a 2D texture where each pixel stores a cube coordinate.

I loop through my dictionary map of tiles, store each tile's coordinate into its pixel, and then assign the alpha channel a number above 1 to store the texture ID.

Then I pass the shader the texture.

Here's my code, explanations following:
code:
// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

float3 hexDeltas[6];

// deltas in order of Top Right to Top Left going clockwise
hexDeltas[0] = int3(1,0,-1); // TR
hexDeltas[1] = int3(1,-1,0); // R
hexDeltas[2] = int3(0,-1,1); // BR
hexDeltas[3] = int3(-1,0,1); // BL
hexDeltas[4] = int3(-1,1,0); // L
hexDeltas[5] = int3(0,1,-1); // TL

// Firstly, convert given x,y,z coordinates in worldspace to axial/cube coordinates.
// Size equals 1 because by hand that's correct, but if fed to the shader and I do the calculations
// within the shader I get the completely wrong result.
float mSize = 1;

float q;
float r;

// World Space to Hex conversion: 
//[url]http://www.redblobgames.com/grids/hexagons/#pixel-to-hex[/url]
float2x2 matA = { sqrt(3)/3, -1/3,
	 				  0, 2/3 };

float1x2 matB = { HexOrigin.x, 
		  HexOrigin.z };

float1x2 matQR = mul(mul(matA,matB), 1/(-mSize));

q = matQR[0][0];
r = matQR[0][1];

// z because worldspace from the camera angle is X,Z instead of X,Y.
// Although strangely I need to flip x and y here and multiply by
// -size to get correct values.

float3 HexOriginCubeCoords = float3(-q-r,q,r);

// now we round..
// I'm not sure if these work.
// If I skip this step I find less tiles.
int rx = round(HexOriginCubeCoords.x);
int ry = round(HexOriginCubeCoords.y);
int rz = round(HexOriginCubeCoords.z);

float x_diff = abs(rx - HexOriginCubeCoords.x);
float y_diff = abs(ry - HexOriginCubeCoords.y);
float z_diff = abs(rz - HexOriginCubeCoords.z);

if (x_diff > y_diff && x_diff > z_diff)
{
	rx=-ry-rz;
}
else if (y_diff > z_diff)
{
	ry = -rx-rz;
}
else
{
	rz = -rx-ry;
}

int3 myHex = int3((rx),(ry),(rz));

// We now know for sure "where" our tile is in cube coordinates.
// Now we need to find out which textures are being used in neigbhouring hexes.

// Textures for current hex is mTexases[0]
// All surrounding hexes are from 1 to 6 clockwise TR to TL.
int mTexases[7]; 

float3 test = float3(0,0,0);
// ID:0 is reserved as "our" texture of the current hex.
// 1 to 6 go from top right clockwise to top left.
// Every hex has it's own texture, find out current hex and track it's texture.
// Loop for each pixel and check it's r,g,b aak it's xyz and compare with myHex.xyz
// If they match check the alpha channel where we store it's texture ID.

for (int m=0; m<Radius; m++)
{
    for (int k=0; k<Radius;k++)
    {
      float ecks = (k/Radius);
      float why = (m/Radius);
      if (round(tex2D(TexMap, float2(ecks,why)).x) == myHex.x
         && round(tex2D(TexMap, float2(ecks,why)).y) == myHex.y
         && round(tex2D(TexMap, float2(ecks,why)).z) == myHex.z)
      {
        mTexases[0] = tex2D(TexMap, float2(ecks,why)).a;
        test = float3(0,1,0);
        break;
      }
    }
}

// We do the same here but now we're also checking the surrounding hexes.
bool isFound = false;
for (int c=1; c < 7; c++)
{
  isFound = false;
  for (int m=0; m<Radius; m++)
  {
    for (int k=0; k<Radius;k++)
    {
      float ecks = k/Radius;
      float why = m/Radius;
      int3 temp = int3(hexDeltas[c-1].x + myHex.x, hexDeltas[c-1].y + 
      myHex.y, hexDeltas[c-1].z + myHex.z);
      if ((round(tex2D(TexMap, float2(ecks,why)).x) == temp.x)
          && (round(tex2D(TexMap, float2(ecks,why)).y) == temp.y)
          && (round(tex2D(TexMap, float2(ecks,why)).z) == temp.z))
      {
        mTexases[c] = tex2D(TexMap, float2(ecks,why)).a;
        test += float3(0,0,1);
        isFound = true;
      }
    }
  }
  if (!isFound)
  {
    mTexases[c] = 4;
  }
  
}
// The above strangely only works for the surrounding tiles of the only tile we find.
// This implies to me that the sampling code doesn't work as intended.
// Okay in theory we now have all of our surrounding textures.

// Blending code
float i,j;
float a[6];
int b[6];
float3 lookup[7];

float3 tempColor;
float iMin;
float iMax;
float temp = 0;
int intTemp = 0;
float interp;

// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

a[0] = B; //
a[1] = E; //
a[2] = D; //
a[3] = A; //
a[4] = F; //
a[5] = C; //

b[0] = 0;
b[1] = 1;
b[2] = 2;
b[3] = 3;
b[4] = 4;
b[5] = 5;

// These are now the six textures the game currently 
// allows, will switch to texture atlas asap.
lookup[0] = tex2D(mHex, _UVs).rgb;
lookup[1] = tex2D(TR, _UVs).rgb;
lookup[2] = tex2D(R, _UVs).rgb;
lookup[3] = tex2D(BR, _UVs).rgb;
lookup[4] = tex2D(BL, _UVs).rgb;
lookup[5] = tex2D(L, _UVs).rgb;
lookup[6] = tex2D(TL, _UVs).rgb;

for (j=0;j<6;j++)
{
 iMin = j;
 for (i = j+1; i<6;i++)
 {
  if (a[i] < a[iMin])
  {
    iMin = i;
  }
 }

 if (iMin != j)
 {
  temp = a[j];
  a[j] = a[iMin];
  a[iMin] = temp;

  intTemp = b[j];
  b[j] = b[iMin];
  b[iMin] = intTemp;

  intTemp = mTexases[j+1];
  mTexases[j+1] = mTexases[iMin+1];
  mTexases[iMin+1] = intTemp;
 }
}

iMax = a[1];

float exp = 5.0;

float interp1 = pow(abs(a[0]-1),exp);
float interp2 = pow(abs(a[1]-1),exp);
float interp3 = pow(abs(a[2]-1),exp);

/*
tempColor = (interp1*lookup[b[0]+1] + 
	interp2*lookup[b[1]+1] +
	interp3*lookup[b[2]+1]) / (interp1 + interp2 + interp3);
*/
tempColor = (interp1*lookup[mTexases[1]] + 
	interp2*lookup[mTexases[2]] +
	interp3*lookup[mTexases[3]]) / (interp1 + interp2 + interp3);

float interp0 = pow(abs(a[0] - 1), 2.2);

//return lerp(lookup[mTexases[0]], tempColor, interp0);
return test;
//return float3(2/Radius,0,0);
//mSize /= 10;
//return float3(mSize / 2,0,0);
//return tex2D(TexMap, float2(0.6,0.1)).rgb;
//int3 temp2 = int3(hexDeltas[0].x + myHex.x, hexDeltas[0].y + 
//myHex.y, hexDeltas[0].z + myHex.z);
//return HexOrigin;
//return myHex;
//return float3(rx,ry,rz);
//return HexOriginCubeCoords;


Right now "return test" is just to see if I FIND any of the tiles.



Which weirdly means I apparently only find 0,0,0 and... its adjacent tiles, yes and no (it would be yellow if it found it properly, but it doesn't!?).

The end result of my coordinate conversion scheming seems to work now:



Albeit with some weird interpolation that shouldn't exist.

The problem is that, despite seemingly getting accurate values for myHex (aka the current hex), when I try to do a comparison between the current hex and a given pixel it finds nothing.

I strongly suspect that tex2D(sampler2D, float2) isn't working, or is interpolating my values somehow when I want to sample a specific pixel; that, or I'm not actually at the correct pixel. I found that integer x,y values didn't seem to work, and I think the coordinates are from 0 to 1.

Any suggestions?

e: My texture mapping:



Edit: Aaaaah, I have an idea. I think my texture there is upside down; I'm trying to figure out an easy way of flipping it.

No wait, that shouldn't matter because it loops through every coordinate regardless.

Edit: I get this bizarre result if, in the first for loop, I check whether X and Y equal 0 but Z matches.



This doesn't make any sense at all.

The most frustrating thing has gotta be that I have no idea if anything is correct; for some reason Size = (Height / (3/4)) / 2 doesn't give me 1 in the shader with 1.5 passed in as Height, but it does in Wolfram/by hand. I divide by 10 and it's still bright red when outputted. I get different results when I have it as (Height * 1/(3/4)) / 2, which is really, really frustrating, and I can't determine if there's a bug.

Raenir Salazar fucked around with this message at 14:38 on Sep 2, 2015

Raenir Salazar
Nov 5, 2010

College Slice
The number of people on the Unity reddit thread who read my question on this topic, clearly didn't even read what my issue is, and suggest advice I'm already trying to implement (but which isn't working) is mind-boggling.

Right now I'm going to do a different implementation of my grid as offset coordinates with 0,0 to N,M bounds to avoid the issue of having negative indices. Then I can store the grid straight up as a 2D texture and read it in a more direct fashion, avoid the question of whether negative numbers even work in the Cg shading language, and hopefully get better debug output.

e:

Here's my offset grid, no smoothing or randomness. I'm just going to put that off until after I can get the shaders working.


So now my offset coordinates are all positive integers.

So lookups should be easy: take a point, convert to cube coordinates, add the delta to find the appropriate adjacent tile, convert that tile's coordinates back to col/row coordinates, and look it up directly in the texture.

No random N^2 searches!
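
e2: A rough, untested sketch of that lookup chain (the helper names are made up; the formulas are the cube/even-r conversions from the redblobgames page):
C++ code:
// Untested sketch: even-r offset -> cube, add a direction delta, convert back
// to even-r offset, then index the map texture directly.
struct Cube { int x, y, z; };

Cube offsetToCube(int col, int row) {              // even-r offset -> cube
    int x = col - (row + (row & 1)) / 2;
    int z = row;
    return { x, -x - z, z };
}

void cubeToOffset(const Cube &c, int &col, int &row) {  // cube -> even-r offset
    col = c.x + (c.z + (c.z & 1)) / 2;
    row = c.z;
}

int neighbourTexId(int col, int row, const Cube &delta,
                   const int *texMap, int mapWidth) {
    Cube c = offsetToCube(col, row);
    c.x += delta.x; c.y += delta.y; c.z += delta.z;
    int ncol, nrow;
    cubeToOffset(c, ncol, nrow);
    return texMap[nrow * mapWidth + ncol];         // direct lookup, no N^2 search
}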

Edit2: I have a very important question; when I read the RGBA of a texture, does it record the values or the colours? Is (1,0,0) distinct from (9,-1,-1) or is it treated the same such that (9,-1,-1) will be "read" as (1,0,0)?

Edit 3: Oh my, just as I asked that, someone responded along (I think) similar lines:

quote:

I can't review your code in detail right now because I'm browsing with a phone. However if your texture data seems incorrect, make sure that your texture's filtering is set to "point". Otherwise you get wrong values because the gpu does interpolation between samples. Also make sure that you don't have any mipmaps autogenerated. Those screw up your data too. If you're storing negative numbers you'll have to use float textures. Or alternatively use value offset in your script and then reverse it in shader (like normal maps are stored)

Is this guy likely correct? Until reading that I was thinking I'd have to use RGBA as a binary byte to send up to 15 textures (0000, 0010, 0011, etc).

Raenir Salazar fucked around with this message at 03:51 on Sep 4, 2015

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Again, I only know how it works with OpenGL, but the RGBA 8-bit standard format will clamp values between 0 and 1. What happens in the shader won't change how a texture handles its values internally. Also, for indexing textures you need to do nearest filtering (i.e. take the nearest value, don't apply filtering.) With OpenGL this means that the internal format of the index texture is GL_R(X)UI (that is to say a single channel of (X) bits containing an unsigned int) and the filtering mode is GL_NEAREST. I have no idea what happens if you try to do linear filtering on an integer texture. If your internal format is a clamping format, as soon as you write to the texture, it will store the clamped value, meaning whatever value you were trying to store doesn't exist in the texture at all.

E: This is all assuming you're talking about the grid texture containing indices into the array texture. Also, he's also correct that you don't want mipmaps for this texture.

E2: If you want negative values replace GL_R(X)UI with GL_R(X)I (or unity equivalent)
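
E3: For reference, in plain OpenGL terms creating such an index texture looks roughly like this (a sketch; width, height and indices are placeholders, and Unity presumably has its own way of expressing the same thing):
C++ code:
// Rough sketch: a single-channel unsigned-integer texture for per-tile indices.
// No filtering and no mipmaps, so you get the exact stored value back.
GLuint indexTex;
glGenTextures(1, &indexTex);
glBindTexture(GL_TEXTURE_2D, indexTex);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// GL_R8UI: one 8-bit unsigned int per texel (GL_R8I if you need negatives).
// indices is a width*height array of GLubyte filled on the CPU.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, indices);

// In the shader it's then a usampler2D, and you'd fetch exact texels with
// texelFetch(tex, ivec2(x, y), 0).r, so no normalised UVs are involved.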

Joda fucked around with this message at 06:40 on Sep 5, 2015

Raenir Salazar
Nov 5, 2010

College Slice
Well this is frustrating and I have no idea what's wrong.

If I manually plug in coordinates like so:

float3 mCol = tex2D(TexMap, float2(0.0,0.9)).rgb;

I can pin point the correct pixels BUT,

float3 mCol = tex2D(TexMap, float2(0.0,1.0)).rgb;

Goes too far and seemingly wraps around.

And something like this:
float2 mHexCoord = float2(u/(width- 1),v/(height- 1));

float3 mCol = tex2D(TexMap, mHexCoord).rgb;

I also do not get correct values.

UV coordinates for tex2D are supposed to be between 0 and 1 for the texture right?

This is accounting for the fact that I do know my UVs are likely upside down; but even accounting for that, it doesn't look like that's the issue.

e: Documentation for Tex2D which should be identical to how it is in GLSL

Raenir Salazar fucked around with this message at 04:31 on Sep 8, 2015

Spatial
Nov 15, 2007

Raenir Salazar posted:

float3 mCol = tex2D(TexMap, float2(0.0,1.0)).rgb;

Goes too far and seemingly wraps around.

[...]

UV coordinates for tex2D are supposed to be between 0 and 1 for the texture right?
Just to be precise, it's a half-open range which includes zero but excludes 1.

In a 4x1 texture the X coordinates for each texel centre are 0, 0.25, 0.5 and 0.75.

Raenir Salazar
Nov 5, 2010

College Slice

Spatial posted:

Just to be precise, it's a half-open range which includes zero but excludes 1.

In a 4x1 texture the X coordinates for each texel centre are 0, 0.25, 0.5 and 0.75.

So instead of col/(size-1) I actually probably do want col/size? Or are some values like 1023/1024 too close to 1 for that to work?

Actually, yeah, that raises a good point, because if I start having larger textures for actual in-game maps and I'm at pixel 1022 out of 1023 (1024 total), the division results in a floating-point value very close to 1, and I don't know if it's actually distinguishing the precise pixel...

Raenir Salazar fucked around with this message at 17:19 on Sep 8, 2015

Sex Bumbo
Aug 14, 2004

Spatial posted:

Just to be precise, it's a half-open range which includes zero but excludes 1.

In a 4x1 texture the X coordinates for each texel centre are 0, 0.25, 0.5 and 0.75.

The x coordinates for the texture coord centers of a 4x1 texture are 0.125, 0.375, 0.625, and 0.875.

center = (texel coord + 0.5) / texture size

If you sample x=0 on a 4x1 texture with linear filtering and wrapping, the result is 0.5 * (texel 0) + 0.5 * (texel 3). It's halfway between those two texel centers.
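
In other words, to land exactly on the centre of texel (x, y) you'd do something like this (x, y, textureWidth and textureHeight being whatever your actual values are):
C++ code:
// uv that hits the centre of texel (x, y)
float u = (x + 0.5f) / textureWidth;
float v = (y + 0.5f) / textureHeight;

// e.g. texel 1022 in a 1024-wide texture: (1022 + 0.5) / 1024 = 0.99854...,
// which is still comfortably inside that texel's [1022/1024, 1023/1024) range.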

Sex Bumbo fucked around with this message at 18:45 on Sep 8, 2015

Raenir Salazar
Nov 5, 2010

College Slice
Okay! I managed to track down the error after reducing the grid to a 5x1 row of hexes; this let me determine that "column" was more or less correct, so when I expanded it to 2 rows I could immediately see that the problem was with "row". Turns out that when converting from cube to even-r offset coordinates I accidentally made my row value equal to my cube x value instead of cube z.

After that it was just a matter of some trivial trial and error to confirm that yes, I do need to have it as 1 - (-col/(mapHeight - 1)).

Now all that remains is figuring out a slight error in accuracy; the second-to-last row of my grid has slightly inaccurate values (appears to be swapped with its neighbour)... Fake edit: solved it by using ceil(...) instead of round(...) for this line: float q = myHex.x + ceil(float(myHex.z + (int(myHex.z) % 1)) / 2);

Which comes from:
code:
# convert cube to even-r offset
col = x + (z + (z&1)) / 2
row = z
I'm a little confused, as it isn't apparent to me why that should round up to the next integer whenever (z + (z&1))/2 would come out uneven.
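
e: Okay, writing it out convinced me: for integer z, z + (z & 1) is always even (the & adds 1 exactly when z is odd), so the division by 2 is exact, and the result is the same as ceil(z / 2.0), which would explain why the ceil works. (Also, the % 1 in my shader line above is always 0, so the ceil is doing all the work there.) Quick untested sanity check:
C++ code:
#include <cmath>
#include <cstdio>

// Sanity check: for integers, (z + (z & 1)) / 2 == ceil(z / 2.0),
// since z + (z & 1) is always even.
int main() {
    for (int z = -4; z <= 4; ++z) {
        int viaMask = (z + (z & 1)) / 2;
        int viaCeil = (int)std::ceil(z / 2.0);
        std::printf("z=%2d  mask=%2d  ceil=%2d\n", z, viaMask, viaCeil);
    }
}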



and here's my map:


I found Paint.net lets me zoom in enough.

Here's something a little larger:



I don't see any errors so I think I am finally able to make progress!

Raenir Salazar
Nov 5, 2010

College Slice
Progress report: solvable weirdness & hacks:



Two things,

1) For some reason my sides are flipped. Left is right, top left is top right etc. I did a simple hack to fix this but I really have no idea why this is so.

Here's my current code that gives the above image; you can see I manually swapped my indices for the hex directions to get them to line up.
code:
// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

// Firstly, convert given x,y,z coordinates in worldspace to axial/cube coordinates.
float mSize = 1;

float column = ((HexOrigin.x * (float(sqrt(3)) / (3))) - (float(HexOrigin.z) / (3))) / mSize;
float row = (HexOrigin.z * (float(2) / (3))) / mSize;

// z because worldspace from the camera angle is X,Z instead of X,Y because reasons.

float3 HexOriginCubeCoords = float3(column,-column-row,row);

// now we round..

int rx = round(HexOriginCubeCoords.x);
int ry = round(HexOriginCubeCoords.y);
int rz = round(HexOriginCubeCoords.z);

float x_diff = abs(rx - HexOriginCubeCoords.x);
float y_diff = abs(ry - HexOriginCubeCoords.y);
float z_diff = abs(rz - HexOriginCubeCoords.z);

if (x_diff > y_diff && x_diff > z_diff)
{
    rx=-ry-rz;
}
else if (y_diff > z_diff)
{
    ry = -rx-rz;
}
else
{
	rz = -rx-ry;
}

int3 myHex_Cube = int3(rx,ry,rz);

// convert back to even-r
float q = myHex_Cube.x + ceil(float(myHex_Cube.z + (int(myHex_Cube.z) % 1)) / 2);
float r = myHex_Cube.z;

float2 mHexCoord = float2(q/(MapWidth - 1),1.0 - (-r/(MapHeight - 1)));

// We now know for sure "where" our tile is in cube coordinates.
// Now we need to find out which textures are being used in neigbhouring hexes.

int mTexases[7]; 

// ID:0 is reserved as "our" texture of the current hex.
// 1 to 6 go from top right clockwise to top left.
float3 _texLookupCol = tex2D(TexMap, mHexCoord).rgb;

// Main texture
mTexases[6] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

float2 directions_offset[2][6];

directions_offset[0][0] = float2(1,-1); // TR
directions_offset[0][1] = float2(1,0);  // R
directions_offset[0][2] = float2(1,1);  // BR
directions_offset[0][3] = float2(0,1);  // BL
directions_offset[0][4] = float2(-1,0); // L
directions_offset[0][5] = float2(0,-1); // TL

directions_offset[1][0] = float2(0,-1); // TR
directions_offset[1][1] = float2(1,0);  // R
directions_offset[1][2] = float2(0,1);  // BR
directions_offset[1][3] = float2(-1,+1);  // BL
directions_offset[1][4] = float2(-1,0); // L
directions_offset[1][5] = float2(-1,-1); // TL

// Top Right
int parity = int(r) & 1;
float2 direction = directions_offset[parity][5];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[0] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Right

direction = directions_offset[parity][4];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[1] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Bottom Right
direction = directions_offset[parity][3];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[2] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Bottom Left
direction = directions_offset[parity][2];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[3] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Left
direction = directions_offset[parity][1];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[4] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Top Left
direction = directions_offset[parity][0];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[5] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);
// Okay in theory we now have all of our surrounding textures.


float i,j;
float a[6];
int b[6];
float3 lookup[7];

float3 tempColor;
float iMin;
float iMax;
float temp = 0;
int intTemp = 0;
float interp;

// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

a[0] = B; //
a[1] = E; //
a[2] = D; //
a[3] = A; //
a[4] = F; //
a[5] = C; //

// These are now the six textures the game currently 
// allows, will switch to texture atlas asap.
lookup[0] = tex2D(_Tex0, _UVs).rgb; //grass
lookup[1] = tex2D(_Tex1, _UVs).rgb; // 
lookup[2] = tex2D(_Tex2, _UVs).rgb;
lookup[3] = tex2D(_Tex3, _UVs).rgb; // clay
lookup[4] = tex2D(_Tex4, _UVs).rgb;
lookup[5] = tex2D(_Tex5, _UVs).rgb; // desert
lookup[6] = tex2D(_Tex6, _UVs).rgb; // forest

for (j=0;j<6;j++)
{
 iMin = j;
 for (i = j+1; i<6;i++)
 {
  if (a[i] < a[iMin])
  {
    iMin = i;
  }
 }

 if (iMin != j)
 {
  temp = a[j];
  a[j] = a[iMin];
  a[iMin] = temp;

  intTemp = mTexases[j];
  mTexases[j] = mTexases[iMin];
  mTexases[iMin] = intTemp;
 }
}

iMax = a[1];

float exp = 5.0;

float interp1 = pow(abs(a[0]-1),exp);
float interp2 = pow(abs(a[1]-1),exp);
float interp3 = pow(abs(a[2]-1),exp);

tempColor = (interp1*lookup[mTexases[0]] + 
	interp2*lookup[mTexases[1]] +
	interp3*lookup[mTexases[2]]) / (interp1 + interp2 + interp3);

float interp0 = pow(abs(a[0] - 1), 2.2);

return lerp(lookup[mTexases[6]], tempColor, interp0);
//return lookup[mTexases[6]];
2) The weird interpolation that doesn't really interpolate is probably because I haven't updated the code to the latest version you suggested, though I imagine a simple fix would be to establish a hierarchy of textures.

e: Your later code implemented:


That looks amazing.

(The weird whiteness is just because I have a lovely water texture)

Raenir Salazar fucked around with this message at 03:54 on Sep 10, 2015

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Yay, good work and glad you got it working. Sorry I haven't been very helpful for the last couple of questions; a lot of it was Unity-specific, and I've been too busy with school to find the time to understand your shader properly.

Joda fucked around with this message at 13:41 on Sep 10, 2015

Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

Yay, glad you got it working. Sorry I haven't been very helpful for the last couple of questions; a lot of it was Unity-specific, and I've been too busy with school to find the time to understand your shader properly.

I probably didn't phrase them right, but I don't think my questions were that Unity-specific per se. :(

But yes! Much rejoicing! I thank my stars that I do have some little bit of the problem-solving skills required of a developer: once I decided to buckle down and ask, "Okay, is there a pattern to why my shader isn't working?" and determined the flippedness was universal, I could narrow down my trial-and-error fixes.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
Whenever I implement Lambertian shading I get these concentric circles on uniformly coloured, flat surfaces. I assume it's a consequence of only having 8 bits per colour channel, but I don't recall ever actually seeing them in professional applications. Is there something I'm doing wrong, or is this kind of artefact just usually hidden by visual complexity (i.e. more polygons, normal mapping, etc.)?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Joda posted:

Whenever I implement Lambertian shading I get these concentric circles on uniformly coloured, flat surfaces. I assume it's a consequence of only having 8 bits per colour channel, but I don't recall ever actually seeing them in professional applications. Is there something I'm doing wrong, or is this kind of artefact just usually hidden by visual complexity (i.e. more polygons, normal mapping, etc.)?

could be a linear-sRGB color space problem

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I've never heard about that? My colour rendertargets are all RGBA8. Does OpenGL assume that what I write to the backbuffer is in sRGB?

Sex Bumbo
Aug 14, 2004
You specify an sRGB texture explicitly, in the same way you specify an RGB texture explicitly, so unless you deliberately did it you probably don't have one (E: and it might need to be enabled). Also, using linear lighting doesn't stop banding, it just shifts the bands around to less perceptible values.

If you take a screenshot and look at the values, they should be adjacent rgb colors, like 127, 127, 127 in one pixel and 128, 128, 128 in the next -- it just happens that your eyes pick up on the small difference. If they're not adjacent values there might be some bad format conversion or something going on. Also it happens more than it should in professional apps imo.

see:
http://loopit.dk/banding_in_games.pdf
https://www.shadertoy.com/view/MslGR8
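
For reference, on the GL side sRGB is opt-in and looks roughly like this (a sketch, not your exact setup):
C++ code:
// Sketch: sRGB has to be asked for. The texture's internal format says so...
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// ...and writes to an sRGB-capable framebuffer only get the linear->sRGB
// conversion applied if it's enabled:
glEnable(GL_FRAMEBUFFER_SRGB);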

Sex Bumbo fucked around with this message at 06:52 on Sep 14, 2015

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

Sex Bumbo posted:

You specify an srgb texture explicitly in the same way you specify an rgb texture explicitly so unless you deliberately did it, you probably don't have one (E: and it might need to be enabled). Also, using linear lighting doesn't stop banding, it just shifts the bands around to less perceptible values.

Working in linear doesn't cause banding, but switching back and forth between linear and sRGB (which it doesn't sound like he's doing) could. I've worked on projects before that ended up with weird banding issues because someone was careless with which post-processing stages were linear and which were sRGB when they tried to reduce the number of passes, for example.

Sex Bumbo posted:

If you take a screenshot and look at the values, they should be adjacent rgb colors, like 127, 127, 127 in one pixel and 128, 128, 128 in the next -- it just happens that your eyes pick up on the small difference. If they're not adjacent values there might be some bad format conversion or something going on. Also it happens more than it should in professional apps imo.

see:
http://loopit.dk/banding_in_games.pdf
https://www.shadertoy.com/view/MslGR8

Yeah, this is also good advice. If you're getting quantization issues, then you'll end up with 'gaps' between bands as mentioned (since the problem comes from projecting a format with lower precision in that range onto a format with higher precision).

Also, is your banding in the bright/medium/dark end of the range?

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe


The colours are definitely adjacent, so I guess it's just a consequence of lacking colour precision. If I remove the attenuation term from the lighting model they become a lot more pronounced. There are two other places with much brighter radiance that don't show the same artefacts, so I guess it's a problem with the lower ranges? I'll probably have to implement some dithering or a non-linear colour space at some point to get around it.

Thanks for the help.
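
E: Jotting the dithering idea down before I forget: add noise smaller than one 8-bit step before quantising, so the gradient turns into (far less visible) noise instead of bands. An untested CPU-side sketch of the idea; in practice it would go in the fragment shader with a noise texture:
C++ code:
#include <cstdint>
#include <random>

// Untested sketch: jitter by less than one 8-bit step before quantising,
// which trades visible banding for low-amplitude noise.
uint8_t quantiseDithered(float value /* linear, 0..1 */, std::mt19937 &rng) {
    std::uniform_real_distribution<float> jitter(-0.5f / 255.0f, 0.5f / 255.0f);
    float v = value + jitter(rng);
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return (uint8_t)(v * 255.0f + 0.5f);
}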

lord funk
Feb 16, 2004

I'm trying to render sharp edged, geometric shapes. It's generated vertex data (not loaded from a model), and I'm updating vertex positions each frame. So I have to calculate surface normals.

What I'd like are each of the triangles to appear flat and sharp-edged. What I have are nice smooth normal transitions:



It seems to me that I can't get sharp surface normals because I'm using indexed rendering (so my vertices are shared between surfaces). Do I have to stop using indexed rendering and instead just duplicate vertices? Or is there a trick to this that I don't know?

haveblue
Aug 15, 2005



Toilet Rascal
That's exactly it. Normals are a property of a vertex, not a polygon, so you can't have two polygons with two different normals share the same vertex record.

Xerophyte
Mar 17, 2008

This space intentionally left blank

lord funk posted:

I'm trying to render sharp edged, geometric shapes. It's generated vertex data (not loaded from a model), and I'm updating vertex positions each frame. So I have to calculate surface normals.

What I'd like are each of the triangles to appear flat and sharp-edged. What I have are nice smooth normal transitions:



It seems to me that I can't get sharp surface normals because I'm using indexed rendering (so my vertices are shared between surfaces). Do I have to stop using indexed rendering and instead just duplicate vertices? Or is there a trick to this that I don't know?

In GLSL you can use the flat keyword and in HLSL you can use the nointerpolation keyword to have a varying not be interpolated. Instead it'll be defined solely by what GL calls the provoking vertex of that primitive. You might be able to keep track of which vertex provokes what triangle, if you're lucky/good.

I'd recommend not doing that, though. Two vertices with different attribute data are not the same vertex and the APIs generally do not let you pretend that they are, for good reasons.

One thing you can do in GL is use the glVertexBindingDivisor command to make GL stride forward less often for some attributes than others when transferring the vertex data. It'll still generate 3 unique vertices per triangle for locality reasons, but it might be easier to code.
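
If you do go the duplicate-the-vertices route, the flat normal is just the normalised cross product of two edge vectors, assigned to all three corners of the face. A rough sketch with glm types (hypothetical function, not any particular API's):
C++ code:
#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

// Sketch: expand an indexed mesh into unindexed triangles, giving each face
// one flat normal shared by its three (now duplicated) vertices.
void buildFlatNormals(const std::vector<glm::vec3> &positions,
                      const std::vector<unsigned> &indices,
                      std::vector<glm::vec3> &outPositions,
                      std::vector<glm::vec3> &outNormals) {
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        glm::vec3 a = positions[indices[i]];
        glm::vec3 b = positions[indices[i + 1]];
        glm::vec3 c = positions[indices[i + 2]];
        glm::vec3 n = glm::normalize(glm::cross(b - a, c - a));
        for (const glm::vec3 &p : {a, b, c}) {
            outPositions.push_back(p);
            outNormals.push_back(n);   // same normal for every corner of the face
        }
    }
}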

lord funk
Feb 16, 2004

Thanks for the info. I should probably point out I'm using Metal, but frankly it's good to get suggestions from more mature graphics APIs so I can dig around and see if there's anything similar.

Working with duplicated vertices:

Sex Bumbo
Aug 14, 2004

lord funk posted:

Thanks for the info. I should probably point out I'm using Metal, but frankly it's good to get suggestions from more mature graphics APIs so I can dig around and see if there's anything similar.

Working with duplicated vertices:


While not a "mature" API, Metal is a modern API, so the patterns you'll fall into using it are far more likely to be optimal and just plain better. Others might use less client code, but they're paying for it in extra driver code. Splitting your triangles is probably the best way to do this. There are ways to cut down on the data storage, but all of them will be slower to process, and there's no real point to them unless you're running low on memory.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I'm getting some really weird results when I try to do multi-sampling for radiosity, specifically on my desktop. On my laptop I get reasonable-looking results with some nice colour bleeding and stuff, but on my desktop I get a screen full of horrible-looking visual artefacts, and I have no clue where they're coming from. I tried disabling blending on the total radiosity render target, switching render-target sample filtering from linear to nearest (it should have been nearest from the beginning anyway), clearing the render targets before drawing to them, and a bunch of other stuff I can't think of right now. If I change the sample radius it appears that the rectangular artefacts become smaller, but they still fill the entire screen. They're even there when I sample just a single point with a radius of 1 (radii are in window space). I've been staring at this problem for a couple of days now and I'm not getting any closer to an answer. What baffles me the most is that it works on one platform but gives these horrendous artefacts on the other.

I know my method for reconstruction works, because I use it for a lambertian shader earlier in the program without problem.

Here's how it looks:

40 samples, 300 radius

Note: the noise is green even if there's no green contribution in the scene.

This is how I sample my textures (they're all double-layered array-textures, which is why I tap each texture twice.)

code:
 for(int i = 0; i < SAMPLES; i++) {
        float sigma_i = (float(i) + 0.5)/float(SAMPLES);
        float theta_i = 2*M_PI*sigma_i*tau + phi;
        vec2 u_i = vec2(cos(theta_i),sin(theta_i));
        float h_i = RADIUS * sigma_i;

        ivec2 offSet = ivec2(h_i*u_i);

        vec2 offSetUV = fragUV.xy + vec2(float(offSet.x)/WIN_WIDTH, float(offSet.y)/WIN_HEIGHT);

        vec3 rad_1 = texture(prev_bounce,vec3(offSetUV,0.0)).rgb;
        float z_1 = texture(depth_texture,vec3(offSetUV,0.0)).r;
        vec3 pos_1 = regenPos(z_1,offSetUV);

        vec3 omega1 = pos_1 - mainPos;
        float geom1 = max(0,dot(normalize(omega1),mainNormal));



        vec3 rad_2 =  texture(prev_bounce,vec3(offSetUV,1.0)).rgb;
        float z_2 = texture(depth_texture,vec3(offSetUV,1.0)).r;
        vec3 pos_2 = regenPos(z_2,offSetUV);

        vec3 omega2 = pos_2 - mainPos;
        float geom2 = max(0,dot(normalize(omega2),mainNormal));



        if(geom1 > 0 && rad_1 != vec3(0,0,0)) {
            M += 1.0f;
            totalIrradiance += rad_1 * geom1;
        }
        if(geom2 > 0 && rad_2 != vec3(0,0,0)) {
            M += 1.0f;
            totalIrradiance += rad_2 * geom2;
        }
    }
This is how I initiate the render target textures:
C++ code:
glGenTextures(1, &radNext);
glBindTexture(GL_TEXTURE_2D_ARRAY,radNext);

glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);

glTexImage3D(GL_TEXTURE_2D_ARRAY,0,GL_RGBA8,width,height,2,0,GL_RGBA,GL_UNSIGNED_BYTE,0);
All textures have the same parameters and I never read from and draw to the same texture in a draw call (I triple checked.)

This is how I call the radiosity shader:
C++ code:
void GBuffer::applyBounce(Camera *cam, GLuint prevTexture, GLuint nextTexture, GLuint totTexture) {
     (...)
     glBindFramebuffer(GL_DRAW_FRAMEBUFFER, filterFBO);

    glDisable(GL_DEPTH_TEST);

    glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,nextTexture,0);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,totTexture,0);

    GLuint buffers[2] = {GL_COLOR_ATTACHMENT0,GL_COLOR_ATTACHMENT1};

    glDrawBuffers(2,buffers);

    glEnablei(GL_BLEND,1);
    glBlendFunc(GL_ONE,GL_ONE);
    glBlendEquation(GL_FUNC_ADD);

    filter_shader.use();

    glBindVertexArray(quad.getVAO());

    glm::mat4 pvm = glm::ortho( (float) -width/2.0f,(float)width/2.0f,(float)-height/2.0f,(float)height/2.0f,0.1f,3.0f)*
                    glm::lookAt(glm::fvec3(0,0,1),glm::fvec3(0,0,-1),glm::fvec3(0,1,0))*
                    glm::scale(glm::vec3((float) width, (float) height,1.0f));

    glm::mat4 invProjection = glm::inverse(cam->getProjection());

    glUniformMatrix4fv(filter_shader.get_U_Location(Shader::U_M_PVM),1,GL_FALSE,glm::value_ptr(pvm));

    glUniformMatrix4fv(filter_shader.get_U_Location(Shader::U_M_VUNPROJECT),1,GL_FALSE,
                       glm::value_ptr(invProjection));

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, currentState ? depths : depths2);
    glUniform1i(filter_shader.get_U_Location(Shader::U_I_DEPTH_TEX),0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D_ARRAY,normals);
    glUniform1i(filter_shader.get_U_Location(Shader::U_I_NORM_TEX),1);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D_ARRAY,diffColors);
    glUniform1i(filter_shader.get_U_Location(Shader::U_I_DIFF_TEX),2);

    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D_ARRAY,prevTexture);
    glUniform1i(filter_shader.get_U_Location(Shader::U_I_PREVRAD),3);

    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D,randTex);
    glUniform1i(filter_shader.get_U_Location(Shader::U_I_NOISE_TEX),4);

    glDrawElements(GL_TRIANGLES,quad.getNoIndices(),GL_UNSIGNED_INT,(void*) 0);

    glDisablei(GL_BLEND,1);
    glDisable(GL_BLEND);

    glEnable(GL_DEPTH_TEST);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
}
Any help would be greatly appreciated.
