Raenir Salazar
Nov 5, 2010

College Slice
Progress report, solvable weirdness & hacks:



Two things:

1) For some reason my sides are flipped: left is right, top left is top right, etc. I did a simple hack to fix this, but I really have no idea why it happens.

Here's my current code that produces the above image; you can see I manually swapped the indexes for each hex direction to get them to line up.
code:
// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

// First, convert the given x,y,z worldspace coordinates to axial/cube coordinates.
// Pointy-top axial conversion: q = (sqrt(3)/3 * x - 1/3 * z) / size, r = (2/3 * z) / size.
float mSize = 1;

float column = ((HexOrigin.x * (sqrt(3.0) / 3.0)) - (HexOrigin.z / 3.0)) / mSize;
float row = (HexOrigin.z * (2.0 / 3.0)) / mSize;

// z rather than y because, from this camera angle, worldspace is X,Z instead of X,Y.

float3 HexOriginCubeCoords = float3(column, -column - row, row);

// now we round..

int rx = round(HexOriginCubeCoords.x);
int ry = round(HexOriginCubeCoords.y);
int rz = round(HexOriginCubeCoords.z);

float x_diff = abs(rx - HexOriginCubeCoords.x);
float y_diff = abs(ry - HexOriginCubeCoords.y);
float z_diff = abs(rz - HexOriginCubeCoords.z);

if (x_diff > y_diff && x_diff > z_diff)
{
    rx=-ry-rz;
}
else if (y_diff > z_diff)
{
    ry = -rx-rz;
}
else
{
	rz = -rx-ry;
}

int3 myHex_Cube = int3(rx,ry,rz);

// Convert back to even-r offset coordinates: col = x + (z + (z & 1)) / 2.
float q = myHex_Cube.x + ((myHex_Cube.z + (myHex_Cube.z & 1)) / 2);
float r = myHex_Cube.z;

float2 mHexCoord = float2(q/(MapWidth - 1),1.0 - (-r/(MapHeight - 1)));

// We now know for sure "where" our tile is in cube coordinates.
// Now we need to find out which textures are being used in neigbhouring hexes.

int mTexases[7];

// Index 6 holds "our" texture for the current hex.
// Indexes 0 to 5 are the neighbours, from top right clockwise around to top left.
float3 _texLookupCol = tex2D(TexMap, mHexCoord).rgb;

// Main texture
mTexases[6] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

float2 directions_offset[2][6];

directions_offset[0][0] = float2(1,-1); // TR
directions_offset[0][1] = float2(1,0);  // R
directions_offset[0][2] = float2(1,1);  // BR
directions_offset[0][3] = float2(0,1);  // BL
directions_offset[0][4] = float2(-1,0); // L
directions_offset[0][5] = float2(0,-1); // TL

directions_offset[1][0] = float2(0,-1); // TR
directions_offset[1][1] = float2(1,0);  // R
directions_offset[1][2] = float2(0,1);  // BR
directions_offset[1][3] = float2(-1,+1);  // BL
directions_offset[1][4] = float2(-1,0); // L
directions_offset[1][5] = float2(-1,-1); // TL

// Top Right
int parity = int(r) & 1;
float2 direction = directions_offset[parity][5];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[0] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Right

direction = directions_offset[parity][4];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[1] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Bottom Right
direction = directions_offset[parity][3];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[2] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Bottom Left
direction = directions_offset[parity][2];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[3] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Left
direction = directions_offset[parity][1];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[4] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);

// Top Left
direction = directions_offset[parity][0];
mHexCoord = float2((q+direction.x)/(MapWidth - 1),1.0 - (-(r+direction.y)/(MapHeight - 1)));
_texLookupCol = tex2D(TexMap, mHexCoord).rgb;
mTexases[5] = (_texLookupCol.x * 4) + (_texLookupCol.y * 2) + (_texLookupCol.z * 1);
// Okay in theory we now have all of our surrounding textures.


int i, j;
float a[6];
float3 lookup[7];

float3 tempColor;
int iMin;
float iMax;
float temp = 0;
int intTemp = 0;

// E: AB - R
// F: ED - L
// A: EF - BL
// B: BC - TR
// C: CD - TL
// D: AF - BR

a[0] = B; //
a[1] = E; //
a[2] = D; //
a[3] = A; //
a[4] = F; //
a[5] = C; //

// These are now the six textures the game currently 
// allows, will switch to texture atlas asap.
lookup[0] = tex2D(_Tex0, _UVs).rgb; //grass
lookup[1] = tex2D(_Tex1, _UVs).rgb; // 
lookup[2] = tex2D(_Tex2, _UVs).rgb;
lookup[3] = tex2D(_Tex3, _UVs).rgb; // clay
lookup[4] = tex2D(_Tex4, _UVs).rgb;
lookup[5] = tex2D(_Tex5, _UVs).rgb; // desert
lookup[6] = tex2D(_Tex6, _UVs).rgb; // forest

// Selection sort: order the six blend weights (and their texture IDs) ascending.
for (j = 0; j < 6; j++)
{
 iMin = j;
 for (i = j + 1; i < 6; i++)
 {
  if (a[i] < a[iMin])
  {
    iMin = i;
  }
 }

 if (iMin != j)
 {
  temp = a[j];
  a[j] = a[iMin];
  a[iMin] = temp;

  intTemp = mTexases[j];
  mTexases[j] = mTexases[iMin];
  mTexases[iMin] = intTemp;
 }
}

iMax = a[1];

float sharpness = 5.0;

float interp1 = pow(abs(a[0] - 1), sharpness);
float interp2 = pow(abs(a[1] - 1), sharpness);
float interp3 = pow(abs(a[2] - 1), sharpness);

tempColor = (interp1*lookup[mTexases[0]] + 
	interp2*lookup[mTexases[1]] +
	interp3*lookup[mTexases[2]]) / (interp1 + interp2 + interp3);

float interp0 = pow(abs(a[0] - 1), 2.2);

return lerp(lookup[mTexases[6]], tempColor, interp0);
//return lookup[mTexases[6]];
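As a sanity check on the even-r conversion above, here's a standalone C++ version using the published Red Blob Games formulas (the struct and function names are mine, not the shader's):

code:
// Hedged cross-check of the cube -> even-r offset step, per Red Blob Games:
// col = x + (z + (z & 1)) / 2, row = z. Assumes x + y + z == 0.
#include <cstdio>

struct Offset { int col, row; };

Offset CubeToEvenR(int x, int y, int z)
{
    Offset o;
    o.col = x + (z + (z & 1)) / 2; // even rows are shoved right
    o.row = z;
    return o;
}

int main()
{
    Offset o = CubeToEvenR(1, -3, 2); // cube coordinates (1,-3,2)
    std::printf("col=%d row=%d\n", o.col, o.row); // prints col=2 row=2
    return 0;
}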
2) The weird interpolation that doesn't really interpolate is probably because I haven't updated the code to the latest version you suggested, though I imagine a simple fix would be to establish a hierarchy of textures.

e: Your later code implemented:


That looks amazing.

(The weird whiteness is just because I have a lovely water texture)

Raenir Salazar fucked around with this message at 03:54 on Sep 10, 2015


Raenir Salazar
Nov 5, 2010

College Slice

Joda posted:

Yay, glad you got it working. Sorry I haven't been very helpful for the last couple of questions; a lot of it was Unity specific, and I've been too busy with school to find the time to understand your shader properly.

I probably didn't phrase them right, but I don't think my questions were that Unity-specific per se. :(

But yay! Much rejoicing! I thank my stars that I have at least some of the problem-solving skills required of a developer: once I decided to buckle down and ask "Okay, is there a pattern to why my shader isn't working?" and determined the flippedness was universal, I could narrow down my trial-and-error fixes.

Raenir Salazar
Nov 5, 2010

College Slice
Oh god, for my class assignment I need to use OpenGL/C++, and I did not miss this at all when using Unity. Having to manually compile my dependency libraries just to use the x64 builds is a crock.

It works though! I just wish the guides were more clear than they currently are.

Raenir Salazar
Nov 5, 2010

College Slice
So I have an OpenGL application that creates a circle and outputs it to the screen. I want to use an orthographic projection and have everything resize accordingly when the window resizes.

I seem to have the aspect ratio correct now after much pain, but when I maximize the window the object seems to migrate to the lower left corner.






code:
glm::mat4 RefreshMVP(float screen_width, float screen_height, int zoom)
{
	const float ar_origin = (float)WIDTH / (float)HEIGHT;
	const float ar_new = (screen_width / screen_height);

	float scale_w = (float)screen_width / (float)WIDTH;
	float scale_h = (float)screen_height / (float)HEIGHT;
	if (ar_new > ar_origin) 
	{
		scale_w = scale_h;
	}
	else 
	{
		scale_h = scale_w;
	}

	float margin_x = (screen_width - WIDTH * scale_w) / 2;
	float margin_y = (screen_height - HEIGHT * scale_h) / 2;
	glm::mat4 Projection = glm::ortho(-WIDTH / ar_origin * 0.5f, 
WIDTH / ar_origin * 0.5f, 
HEIGHT / ar_origin * 0.5f, -HEIGHT / ar_origin * 0.5f, -1.0f, 100.0f);
	/*
	glm::mat4 Orthographic = glm::ortho((float)-(screen_width / 2) * zoom, 
(float)(screen_width / 2) * zoom,
		(float)-(screen_height / 2) * zoom, (float)(screen_height / 2) * zoom, (float)0.1, (float)100);
		*/
	// Projection matrix : 45° Field of View, 4:3 ratio,
	// display range : 0.1 unit <-> 100 units
	//Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
	// Or, for an ortho camera :
	//glm::mat4 Projection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,0.0f,100.0f); 
// In world coordinates

	// Camera matrix
	glm::mat4 View = glm::lookAt(
		glm::vec3(0, 0, 1), // Camera is at (0,0,1), in World Space
		glm::vec3(0, 0, 0), // and looks at the origin
		glm::vec3(0, 1, 0)  // Head is up (set to 0,-1,0 to look upside-down)
		);
	// Model matrix : an identity matrix (model will be at the origin)
	glm::mat4 Model = glm::mat4(1.0f);
	// Our ModelViewProjection : multiplication of our 3 matrices
	glm::mat4 MVP = Projection * View * Model; 
// Remember, matrix multiplication is the other way around

	return MVP;
}
Shader (vert and frag):
code:
#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;

out vec3 mVertPositionInModelSpace;

void main(){

	mVertPositionInModelSpace = vertexPosition_modelspace;
	// Output position of the vertex, in clip space : MVP * position
	gl_Position =  MVP * vec4(vertexPosition_modelspace,1);

    //gl_Position.xyz = vertexPosition_modelspace;
    //gl_Position.w = 1.0;

}

#version 330 core

// Output data
out vec3 color;
in vec3 mVertPositionInModelSpace;

void main()
{

	// Output color = red 
	//color = mVertPositionInModelSpace.xyz;
	color = vec3(1,0,0).xyz;

}
The code for recalculating the MVP matrix is a little complex because otherwise it stretches my object when I play with the window size.

Edit: I'm using modern OpenGL and GLFW.

code:
int main()
{
	int screen_width = WIDTH;
	int screen_height = HEIGHT;

	// Initialise GLFW
	if (!glfwInit())
	{
		fprintf(stderr, "Failed to initialize GLFW\n");
		return -1;
	}

	glfwWindowHint(GLFW_SAMPLES, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
	//glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

	// Open a window and create its OpenGL context
	window = glfwCreateWindow(screen_width, screen_height, "Tutorial", NULL, NULL);
	if (window == NULL) {
		fprintf(stderr, "Failed to open GLFW window. If you have an Intel GPU, they are 
not 3.3 compatible. Try the 2.1 version of the tutorials.\n");
		glfwTerminate();
		return -1;
	}
	glfwMakeContextCurrent(window);

	// Initialize GLEW
	glewExperimental = GL_TRUE;
	if (glewInit() != GLEW_OK) {
		fprintf(stderr, "Failed to initialize GLEW\n");
		return -1;
	}

	// Ensure we can capture the escape key being pressed below
	glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);

	// Dark blue background
	glClearColor(0.0f, 0.0f, 0.4f, 0.0f);

	// Here we do stuff.
	drawCircle();
	std::cout << vecCirclePts.size() << std::endl;

	GLuint VertexArrayID;
	glGenVertexArrays(1, &VertexArrayID);
	glBindVertexArray(VertexArrayID);

	// Create and compile our GLSL program from the shaders
	GLuint programID = LoadShaders("SimpleVertexShader.vert", "SimpleFragmentShader.frag");

	// Get a handle for our "MVP" uniform
	GLuint MatrixID = glGetUniformLocation(programID, "MVP");

	int zoom = 1;

	glm::mat4 MVP = RefreshMVP(screen_width, screen_height, zoom);
	static const GLfloat g_vertex_buffer_data[] = {
		-1.0f, -1.0f, 0.0f,
		1.0f, -1.0f, 0.0f,
		0.0f,  1.0f, 0.0f,
	};

	float* a = &vecCirclePts[0];
	GLuint vertexbuffer;
	glGenBuffers(1, &vertexbuffer);
	glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
	glBufferData(GL_ARRAY_BUFFER, vecCirclePts.size() * sizeof(float), a, GL_STATIC_DRAW);

	GLuint colorbuffer;
	glGenBuffers(1, &colorbuffer);
	glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
	glBufferData(GL_ARRAY_BUFFER, vecCirclePts.size() * sizeof(float), a, GL_STATIC_DRAW);
	int new_height = screen_height;
	int new_width = screen_width;

	do {
		//glfwGetWindowSize(window, &new_width, &new_height);
		MVP = RefreshMVP(new_width, new_height, zoom);
		glViewport(0, 0, screen_width, screen_height);
		double x, y;
		glfwGetCursorPos(window, &x, &y);
		

		std::cout << "x: " << x << ", y: " << y << std::endl;
		// Clear the screen
		glClear(GL_COLOR_BUFFER_BIT);

		// Use our shader
		glUseProgram(programID);

		// Send our transformation to the currently bound shader, 
		// in the "MVP" uniform
		glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);

		// 1st attribute buffer : vertices
		glEnableVertexAttribArray(0);
		glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
		glVertexAttribPointer(
			0,                  
// attribute. No particular reason for 0, but must match the layout in the shader.
			3,                  // size
			GL_FLOAT,           // type
			GL_FALSE,           // normalized?
			0,                  // stride
			(void*)0            // array buffer offset
			);

		// 2nd attribute buffer : colors
		glEnableVertexAttribArray(1);
		glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
		glVertexAttribPointer(
			1,                                
// attribute. No particular reason for 1, but must match the layout in the shader.
			3,                                // size
			GL_FLOAT,                         // type
			GL_FALSE,                         // normalized?
			0,                                // stride
			(void*)0                          // array buffer offset
			);

		// Draw the triangle !
		glDrawArrays(GL_TRIANGLES, 0, vecCirclePts.size());

		glDisableVertexAttribArray(0);
		glDisableVertexAttribArray(1);

		// Swap buffers
		glfwSwapBuffers(window);
		glfwPollEvents();

	} // Check if the ESC key was pressed or the window was closed
	while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
		glfwWindowShouldClose(window) == 0);

	// Cleanup VBO and shader
	glDeleteBuffers(1, &vertexbuffer);
	glDeleteBuffers(1, &colorbuffer);
	glDeleteProgram(programID);
	glDeleteVertexArrays(1, &VertexArrayID);

	// Close OpenGL window and terminate GLFW
	glfwTerminate();

	//std::cout << "Press ENTER to continue...";
	//std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
	return 0;
}
Edit 2:

And soooooolved through some trial and error: it turns out I needed to use glViewport, which I decided to try after noticing that the OpenGL tutorials I was following didn't update the window size and weren't using glViewport either, so it seemed like as good an attempt as any.

code:
glm::mat4 RefreshMVP(float screen_width, float screen_height, int zoom)
{
	glm::mat4 Projection = glm::ortho(-screen_width/2.0f, screen_width/ 2.0f, 
        -screen_height/ 2.0f, screen_height/ 2.0f, -1.0f, 100.0f);

	// Camera matrix
	glm::mat4 View = glm::lookAt(
		glm::vec3(0, 0, 1), // Camera is at (0,0,1), in World Space
		glm::vec3(0, 0, 0), // and looks at the origin
		glm::vec3(0, 1, 0)  // Head is up (set to 0,-1,0 to look upside-down)
		);
	// Model matrix : an identity matrix (model will be at the origin)
	glm::mat4 Model = glm::mat4(1.0f);
	// Our ModelViewProjection : multiplication of our 3 matrices
	glm::mat4 MVP = Projection * View * Model; // Remember, matrix multiplication is the other way around
	glViewport(0, 0, screen_width, screen_height);
	return MVP;
}
And this works without having to apply weird scaling factors! My code is now simpler! I'm not sure why this works, but it does. (Presumably because the ortho volume now matches the window's pixel dimensions, resizing reveals more or less of the world instead of stretching it.)

Raenir Salazar fucked around with this message at 03:43 on Oct 6, 2015

Raenir Salazar
Nov 5, 2010

College Slice

Suspicious Dish posted:

I don't see where you update the GL viewport or your MV matrix with the new window size.

It's updated in my code just not in what I pasted. :v:

code:
                glfwGetWindowSize(window, &new_width, &new_height);
		MVP = RefreshMVP(new_width, new_height, zoom);
I put glViewport(0, 0, screen_width, screen_height); in my MVP refresh function. I'm not sure where best practice says it should go, but it seems to work there.
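For what it's worth, a sketch of the usual GLFW arrangement (assuming GLFW 3.x; the callback name is illustrative), where the viewport update lives in a resize callback instead of the refresh function:

code:
#include <GL/glew.h>
#include <GLFW/glfw3.h>

// Called by GLFW whenever the framebuffer changes size, so glViewport
// only runs on actual resizes rather than every frame.
static void framebuffer_size_callback(GLFWwindow* /*window*/, int width, int height)
{
    glViewport(0, 0, width, height);
}

// ...after glfwCreateWindow()/glfwMakeContextCurrent():
// glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);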

Raenir Salazar
Nov 5, 2010

College Slice

Suspicious Dish posted:

You should call glViewport when the window changes size (or at the start of every frame, for simple applications).

For games with complex scenes, would that still qualify as "simple"? Right now the size change comes from clicking the maximize button or dragging the window, so I assume it's sufficient to call it at the beginning?

quote:

glViewport specifies how GL's clip space maps to "screen space". You can technically have multiple GL viewports per window (think AutoCAD or Maya with its embedded previews), so you have to specify this yourself. For most applications, it's simply the size of the window, though.

quote:

Once you have that set up, you can imagine a number line with the far left corner being -1 and the far right corner being +1, and similarly for up and down. This is known as "clip space". GL's main rendering works in this space.

This I understand.

quote:

In order to have a circle that's a certain size in screen space (e.g. "50px radius"), you have to set up the coordinates to convert from the screen space that you want, to the clip space that GL wants. You do this by setting up a matrix that transforms them to that clip space, using the screen space as an input.

This I also generally understand barring the occasional site that doesn't consistently use the correct terminology.

Though right now here's an interesting thing that's confusing me. When I create a square at coordinates 0,0 to 1,1 (two triangles) and pass it to my shader before applying any matrix transformations for projection, it's about a quarter of my app in size. When I do apply a projection transformation (and suppose my viewport is now 800,600), the square becomes really small (iirc; I'm at work so I don't have the app on me).

What's the ratio of "unit" length (say, radius or side length) to pixels? How would I do the Unity thing of having a square or a cube that's "one meter" in length/radius, so I know how everything else should be scaled relative to it?
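One way to pin that down (a sketch under my own assumptions, not Unity's actual convention): with an orthographic projection you choose the pixels-per-unit ratio yourself, by sizing the ortho volume in world units instead of pixels.

code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Assumption: 1 world unit ("meter") should cover 100 pixels on screen.
const float PIXELS_PER_UNIT = 100.0f;

glm::mat4 PixelScaledOrtho(float screen_width, float screen_height)
{
    float half_w = (screen_width  / PIXELS_PER_UNIT) * 0.5f;
    float half_h = (screen_height / PIXELS_PER_UNIT) * 0.5f;
    // A 1x1 quad now always rasterizes to PIXELS_PER_UNIT pixels per side,
    // no matter the window size.
    return glm::ortho(-half_w, half_w, -half_h, half_h, -1.0f, 100.0f);
}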

Raenir Salazar
Nov 5, 2010

College Slice


Yeah! Got a nice golden ratio distribution of these little red circles. (Each made with 360 triangles* to give the impression of a smooth circle; if anyone has a good solution for drawing a smooth circle in OpenGL I'm very interested.)

The main success here is implementing a Render() function to call to draw my shapes with some simple positioning and the theoretical option to pass different attributes to the shader for each circle.

The next milestone is to actually get these moving in some illusion of newtonian frictionless vacuum physics and collisions.

*I use a VBO with an array, so I'm not using immediate mode or any other deprecated features; I draw the circle with its 1000+ verts once, make it a vector, and then pass it off to my buffer/shader. The next thing I can do is pass a triangle index for further optimization, but really, drawing an arbitrary number of lines/triangles to create the illusion of a circle seems like a very brute-forcey method to me. There's gotta be a standard solution somewhere, no?
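For what it's worth, the two usual answers (sketched here under my own naming, not from the thread): use far fewer segments in a triangle fan, since 64 already looks round at most sizes, or draw a single quad and cut the circle out in the fragment shader with a distance test.

code:
#include <cmath>
#include <vector>

// Build interleaved x,y,z positions for GL_TRIANGLE_FAN: center first,
// then segments+1 rim points (the last repeats the first to close the loop).
std::vector<float> CircleFan(float cx, float cy, float radius, int segments = 64)
{
    std::vector<float> verts = { cx, cy, 0.0f };
    for (int i = 0; i <= segments; ++i)
    {
        float theta = 2.0f * 3.14159265f * float(i) / float(segments);
        verts.push_back(cx + radius * std::cos(theta));
        verts.push_back(cy + radius * std::sin(theta));
        verts.push_back(0.0f);
    }
    return verts; // draw with glDrawArrays(GL_TRIANGLE_FAN, 0, segments + 2)
}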

Raenir Salazar
Nov 5, 2010

College Slice
Yeah pretty happy about this.

Basic opengl 2D physics thing. :)

Raenir Salazar fucked around with this message at 01:42 on Oct 16, 2015

Raenir Salazar
Nov 5, 2010

College Slice
I am using Assimp to load a rigged mesh into an OpenGL application; the application is meant to let me manually manipulate the bones and see the mesh transform in real time.

Everything works, except that I want the bones to rotate about the global Z axis facing the camera.

So that, supposing a hand is facing mostly towards me, the hand rotates as though its joint's Z axis were pointed towards the screen.

Right now I have this


result.

The bones rotate according to their local Blender roll axis.

Code:

code:
glm::mat4 CurrentNodeTransform;
CopyaiMat(&this->getScene()->mRootNode->FindNode(boneName.c_str())->
mTransformation,
 CurrentNodeTransform);

glm::mat4 newRotation = glm::rotate(angle, glm::vec3(0.0, 0.0, 1.0));
 

CurrentNodeTransform = (CurrentNodeTransform * newRotation);
 
CopyGlMatToAiMat(CurrentNodeTransform,
 this->getScene()->mRootNode->FindNode(boneName.c_str())->mTransformation);  
  
I posted to the OpenGL/Vulkan forums and they suggested IIRC this change:

quote:

Obtain the transformation from bone space to view space (i.e. the view transformation multiplied by the global bone transformation). Invert it to get the transformation from view space to bone space. Transform the vector (0,0,1,0) by the inverse to get the bone-space rotation axis.

I am probably misunderstanding this but,

I now have this:
code:
glm::mat4 BoneToView = View * CurrentNodeTransform;
glm::mat4 BoneToViewInverse = glm::inverse(BoneToView);
glm::vec4 BoneSpaceRotationAxis = 
     BoneToViewInverse * glm::vec4(0, 0, 1, 0);
printf("Printing BoneSpaceRotationAxis");
PrintVector4(BoneSpaceRotationAxis);
glm::mat4 newRotation = 
     glm::rotate(angle, glm::normalize(glm::vec3(BoneSpaceRotationAxis)));

CurrentNodeTransform = CurrentNodeTransform * newRotation;  
But it still seems to rotate about their blender roll axis (just slightly differently).



Lines for the axis it rotates around, circle if it happens to be pointed at the camera.

CurrentNodeTransform is the transform of the bone relative to its parent. I've tried cramming in, at various points, a version that multiplied the local transform by its parent transform, but it just made things weird.

:smith:

e: Edited a couple of times to not break the forum CSS.

Raenir Salazar fucked around with this message at 18:39 on Oct 27, 2016

Raenir Salazar
Nov 5, 2010

College Slice
Aaaaand I solved it. No idea how I was so close before but somehow fumbled the ball at the last second; I feel like I would've tried this permutation of multiplications, and I just don't know how I missed it.

code:
glm::mat4 LocalNodeTransform;
glm::mat4 GlobalNodeTransform;
CopyaiMat(&this->getScene()->mRootNode->FindNode(boneName.c_str())
->mTransformation, LocalNodeTransform);
FindBone(boneName, this->getScene()->mRootNode, glm::mat4(1.0), GlobalNodeTransform);
//glm::mat4 newRotation = glm::rotate(angle, glm::vec3(axis.x, axis.y, axis.z));


glm::mat4 View = glm::lookAt(
    glm::vec3(0, 0, 1), // Camera in World Space
    glm::vec3(0, 0, 0), // and looks at 
    glm::vec3(0, 1, 0)  // Head is up (set to 0,-1,0 to look upside-down)
    );

// NEED GLOBAL ROTATION
//CurrentNodeTransform = newRotation * CurrentNodeTransform;
//CurrentNodeTransform = (CurrentNodeTransform * newRotation);

glm::mat4 BoneToView = GlobalNodeTransform * View;
glm::mat4 BoneToViewInverse = glm::inverse(BoneToView);
glm::vec4 BoneSpaceRotationAxis = BoneToViewInverse * glm::vec4(0, 0, 1, 0);
printf("Printing BoneSpaceRotationAxis");
PrintVector3(glm::normalize(BoneSpaceRotationAxis));
glm::mat4 newRotation = glm::rotate(angle, 
glm::normalize(glm::vec3(BoneSpaceRotationAxis)));

LocalNodeTransform = m_GlobalInverseTransform * 
newRotation * LocalNodeTransform; 
//^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^!!!!!!!

CopyGlMatToAiMat(LocalNodeTransform, this->getScene()->mRootNode->
FindNode(boneName.c_str())->mTransformation);
I had the general idea to separate out my global and local transforms but somehow whiffed and got my multiplication order mixed up at the last second; I had it right originally, just not with the inverse BoneToView matrix there.
The second-to-last line needed to be:

code:
LocalNodeTransform = LocalNodeTransform * newRotation;
not:
code:
LocalNodeTransform = m_GlobalInverseTransform * newRotation * LocalNodeTransform;
Which was just me randomly trying things and getting the order wrong.

Raenir Salazar fucked around with this message at 22:47 on Oct 27, 2016

Raenir Salazar
Nov 5, 2010

College Slice
So, for some reason,

code:
glm::rotate(glm::mat4(1.0), RotateX, glm::vec3(1, 0, 0))
Does not work, but:

code:
glm::rotate(glm::mat4(1.0), glm::radians(RotateX), glm::vec3(1, 0, 0))
Does.

The former call results in rotations of roughly 30-degree increments for each unit of rotation, so Y: 1 results in my model rotating about 30 degrees.

And thus Y: 90 looks like it goes something like 15 to 30 degrees too far.

But glm::radians works.

This is completely at odds with the documentation for GLM, as far as I can tell, which says "angleInDegrees" for the second parameter.

What gives?
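For reference: GLM switched its default from degrees to radians (around 0.9.6, if I recall the release notes, degree parameters were dropped entirely), so older manuals saying "angleInDegrees" no longer match current headers. The portable form is to convert explicitly:

code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Angles are radians in current GLM; glm::radians() does the conversion.
glm::mat4 rot = glm::rotate(glm::mat4(1.0f),
                            glm::radians(90.0f),      // 90 degrees
                            glm::vec3(1.0f, 0.0f, 0.0f));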

Raenir Salazar
Nov 5, 2010

College Slice
That explains it! Google brings up 0.9.3-ish versions of the manual for some reason.

Raenir Salazar
Nov 5, 2010

College Slice
I managed to finish my final project! Here's a Demo video + Commentary.

Basically I made an OpenGL application that animates a rigged mesh using the Windows Kinect v2.

There are two outstanding issues:

1. Right now every frame is a keyframe when inserting. I don't really have it so that you can have a 30-second animation with, say, 3 keyframes where it interpolates (see the sketch after this list). I'm seeing if I can fix it, but I'm getting some strange bad-allocation memory errors when I try, on super simple lines of code too, like aiVector3D* positionKeys = new aiVector3D[NEW_SIZE];. I don't get it; I'm investigating.

2. In theory it works on any mesh, as long as they share the same skeleton structure and names; their bones also have to have a particular orientation that matches the Kinect, but when I try to fix things so it matches, it ruins my rigging on the pre-supplied blend files I found on YouTube from Sebastian Lague. I'd have to reskin the meshes to the fixed orientations, which is a huge headache, as I'm not 100% sure how the orientations have to be set in Blender to make the Kinect happy.
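On issue 1, a hedged sketch of the usual sampling approach (the function and variable names are mine, using Assimp's stock aiNodeAnim types): store only the sparse keys, and interpolate between the two neighbouring keys at playback time rather than inserting a key every frame.

code:
#include <assimp/anim.h>
#include <algorithm>

// Lerp the position channel at time t (in ticks); assumes at least one key.
// Rotation keys would be handled the same way, but via
// aiQuaternion::Interpolate (a slerp) instead of a lerp.
aiVector3D SamplePosition(const aiNodeAnim* channel, double t)
{
    // Keys are sorted by mTime; find the last key at or before t.
    unsigned i = 0;
    while (i + 1 < channel->mNumPositionKeys &&
           channel->mPositionKeys[i + 1].mTime <= t)
        ++i;
    unsigned j = std::min(i + 1, channel->mNumPositionKeys - 1);

    const aiVectorKey& k0 = channel->mPositionKeys[i];
    const aiVectorKey& k1 = channel->mPositionKeys[j];
    double span = k1.mTime - k0.mTime;
    float f = span > 0.0 ? float((t - k0.mTime) / span) : 0.0f;
    return k0.mValue + (k1.mValue - k0.mValue) * f; // component-wise lerp
}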

quote:

- Bone direction(Y green) - always matches the skeleton.
- Normal(Z blue) - joint roll, perpendicular to the bone
- Binormal(X orange) - perpendicular to the bone and normal

Okay, so Y makes sense to me: it follows the length of the bone from joint to joint. I'm not sure if it's positive Y or negative Y, but I hope it doesn't matter. In Blender the default orientation, following most tutorials, is a positive Y orientation facing away from the parent bone.

Now "Normal" and "Binormal" don't make sense to me in any practical way. If the Bone is following my mesh's arm, is Z palm up or palm down? This is all I really care about and I don't see anything in my googling that implies what's correct. Using Blender's "Recalculate Bone Roll" with "Global Negative Y Axis" points the left arm Z's axis forward, and sometimes this gives good results?

I want my palm movement to match my palm orientation, but it's hard to get right: my mesh gets deformed if I edit my bones without rerigging it, and it's hard to know up front whether I'm right. :(

Raenir Salazar
Nov 5, 2010

College Slice

Absurd Alhazred posted:

I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, so perpendicular to the plane of movement of the bone.



Would this be that? Y follows the bone, but then X and Z feel like they could be anything. I'm confused: can't the bone rotate on either the X or the Z axis?

Raenir Salazar
Nov 5, 2010

College Slice

Absurd Alhazred posted:

Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it?

* Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.

IIRC the Kinect treats every bone the same way. It only represents one finger and the thumb though. MSDN

Though neither of my meshes has fingers, iirc.

In Blender, unless I have inverse kinematics, I can rotate the bones however I want when animating; so if Z is the bone normal/roll and X is the binormal, how should the bones be oriented in Blender with respect to their "natural" plane of movement? Which brings me to:


roomforthetuna posted:

The fact that you can appear to bend your arm around either axis is really a facet of the Y-axis rotation of the bone above it in the hierarchy; when you rotate *that* bone 90 degrees you effectively switch the directions of the X and Z axes of the lower bone (sign notwithstanding). The body does a pretty good job of hiding this, but with your elbow out, try touching your fist to your chest, watch the joint, and then try to point your lower arm upwards without the joint rotating. It doesn't work.

I can see this, but what about the shoulder? It can rotate forwards (holding my arms in front of me) and sideways, so that my arms point away from my body. Or is my collarbone doing the rotating that causes that axis change?

Raenir Salazar
Nov 5, 2010

College Slice
I am struggling with this shader here:

Perlin Noise Shader

I'd like it to properly display the generated noise regardless of the dimensions of the plane it's on, instead of stretching or squashing. Can I get it to use world coordinates instead of vertex coordinates?

Raenir Salazar
Nov 5, 2010

College Slice
Basically I just want to (in Unity) generate procedural Perlin/simplex noise to a texture, to use for real-time terrain generation (I tried using DOTS/Unity's Jobs system, but 5k by 2k took an unacceptable 10 seconds).

In Unity you use Graphics.Blit, which at first didn't work very well but eventually some people were able to help me solve it.

Changing it from
code:
o.uv = v.vertex;
to:
code:
o.uv = v.texcoord - float4(0.5, 0.5, 0, 0);
Let Blit properly save it to a texture, and it lets me zoom in and out from the center of the plane.

From there, to get the displayed material on the plane not to squash/stretch when I change the scale of the plane to something that isn't square (1408, 1, 512), I did this:

code:
				o.uv = v.texcoord - float4(0.5, 0.5, 0, 0);
				o.uv.x = o.uv.x * _Scale.x;
				o.uv.y = o.uv.y * _Scale.y;
I was looking into "world space coordinates" because, as far as I knew, the points for the noise were determined by the object vertices, but with the above and subsequent changes that's no longer needed. Basically, if I made a plane wider than a square, I wanted the generated noise to "continue".

Where _Scale is something like (2.75, 1) in the case of 1408 by 512, to make sure the noise/material displayed on the plane isn't stretched.

Now I want to be able to both zoom and pan (zoom is controlled by _Frequency), but I can't get that to work.

If I put the offsets in inoise, it pans and stays centered, but has a weird parallax/cloud effect.
code:
			float inoise(float2 p)
			{
				p += _Offset;
				float2 P = fmod(floor(p), 256.0);	// FIND UNIT SQUARE THAT CONTAINS POINT
				p -= floor(p);                      // FIND RELATIVE X,Y OF POINT IN SQUARE.
				float2 f = fade(p);                 // COMPUTE FADE CURVES FOR EACH OF X,Y.

				P = P / 256.0;
				const float one = 1.0 / 256.0;

				// HASH COORDINATES OF THE 4 SQUARE CORNERS
				float A = perm(P.x) + P.y;
				float B = perm(P.x + one) + P.y;

				// AND ADD BLENDED RESULTS FROM 4 CORNERS OF SQUARE
				return lerp(lerp(grad(perm(A), p),
								   grad(perm(B), p + float2(-1, 0)), f.x),
							 lerp(grad(perm(A + one), p + float2(0, -1)),
								   grad(perm(B + one), p + float2(-1, -1)), f.x), f.y);

			}
I also get the same result doing it this way:
code:
			// fractal abs sum, range 0.0 - 1.0
			float turbulence(float2 p, int octaves)
			{
				float sum = 0;
				float freq = _Frequency, amp = 1.0;
				for (int i = 0; i < octaves; i++)
				{
					sum += abs(inoise(_Offset + p * freq))*amp;
					freq *= _Lacunarity;
					amp *= _Gain;
				}
				return sum;
			}
In Sebastian Lague's procedural landmass tutorial, someone solved a similar parallax effect by doing this:

code:
float sampleX = ((x - halfWidth) / scale * frequency) + (octaveOffsets[i].x / scale * frequency);
float sampleY = ((y - halfHeight) / scale * frequency) + (octaveOffsets[i].y / scale * frequency);
But no matter what I try, I can't seem to solve the parallax without also breaking the centered zoom.

I don't really need 3D or 4D noise, because I'm likely just going to use a falloff map to ensure the map is surrounded by water, so I don't need it to wrap around.

I can probably live with the parallax, but if there's a simple change to fix it, I'd be grateful.


Raenir Salazar
Nov 5, 2010

College Slice

Xerophyte posted:

Right, OK. I'm kind of avoiding doing a deep dive into that particular simplex noise implementation, because it should be irrelevant to your problem and it seems very Unity-specific. I would again be deeply surprised if Unity does not have a better solution to your problem than "edit this shader" but I don't know Unity. Someone who does could probably give a better answer.

This said, the "parallax" you're talking about sounds like what happens when you scale or translate the different octaves in simplex noise independently. I think inoise is called for each octave in that implementation, so if you insert a fixed transform there then the transforms will only be correct for at most one octave of the noise and you get the effect that the different layers of the noise move independently.


For texture transforms, you're on the right track by the sounds of it. You can always do those without touching anything about the details of the texture generation or even knowing what type of texture you are using. You don't need or want to touch noise generation parameters like frequency or bandwidth to zoom or pan, any more than you'd need to edit an image-based texture in paint to zoom or pan.

If you have a texture coordinate p, and you want to translate it so the texture origin is centered on a point p0, you can do the transform p = p + p0.
If you want to zoom in by a factor of k, you can do the transform p = p * k.

Transforming the texture lookup coordinate prior to look-up works for any texture, procedural or image, of any dimensions. So just do that wherever you're specifying the o.uv = v.texcoord - float4(0.5, 0.5, 0, 0); texture coordinate; I assume in the vertex shader. You'd end up with something like:

code:
// Assuming we have the uniforms:
// uniform float2 _Scale;
// uniform float2 _Translate;
// uniform float2 _OutputSize;
// Your Uniforms May Vary

// Change the extents from [0,1] -> [-0.5,0.5] as before
o.uv = v.texcoord - float4(0.5, 0.5, 0, 0);

// Translate to center the texture on a new point
o.uv = o.uv + _Translate;

// Zoom in or out on the new center
o.uv = o.uv * _Scale;

// Correct for blitting to an image with a non-square aspect
float aspectRatio = _OutputSize.x / _OutputSize.y;
o.uv.x = o.uv.x * aspectRatio;
This will look different in your case; I don't know how Unity does texture transforms. As you noticed, you can bake the aspect ratio into the _Scale parameter, for instance. In general, a texture transform can be expressed with homogeneous coordinates and a matrix, like any other affine transform.


I suggested 3D noise because I figured you were texturing 3D data. I'm not sure what you mean by wrap-around, but it sounds unrelated to texture dimension. Since you're texturing a 2D image, use a 2D texture.

Thanks for this! I was able to gradually solve it with some help in the Unity Discord, so for reference I'll document the solution for your interest :) Unity doesn't really have a better noise-generation solution that's real-time. I tried using the noise generation function from their math library in DOTS (their multithreading/jobs API) and it still took 10 seconds or more for a 5k by 2k texture, while this GitHub project was real-time on the shader. Basically it seemed like an unavoidable issue: the snoise(...) function is a very expensive operation when run on the CPU with the built-in Unity version, no matter what. Maybe one of those "fastnoise" libraries would've done better, but they use generators and would've needed to be re-implemented to work multithreaded.

The solution in the end looks like this:

code:
Shader "Noise/IPN_FBM_2D"
{
	Properties{
		_Frequency("Frequency", float) = 1
		_Lacunarity("Lacunarity", float) = 2
		_Gain("Persistence", float) = 0.5
		_Scale("Scaling", Vector) = (1,1,0,0)
		_Offset("Offset", Vector) = (0,0,0,0)
	}
	SubShader
	{
		Pass
		{

			CGPROGRAM

			#pragma vertex vert
			#pragma fragment frag
			#pragma target 3.0
			#include "UnityCG.cginc"

			sampler2D _PermTable1D, _Gradient2D;
			float _Frequency, _Lacunarity, _Gain;
			float2 _Scale;
			float2 _Offset;

			struct v2f
			{
				float4 pos : SV_POSITION;
				float2 uv : TEXCOORD;
			};
			 v2f vert(appdata_base v)
			{
				v2f o;
				o.pos = UnityObjectToClipPos(v.vertex);
				o.uv = _Scale * (v.texcoord - 0.5) + _Offset / _Frequency;

				return o;
			}
			float2 fade(float2 t)
			{
				return t * t * t * (t * (t * 6 - 15) + 10);
			}

			float perm(float x)
			{
				return tex2D(_PermTable1D, float2(x,0)).a;
			}

			float grad(float x, float2 p)
			{
				float2 g = tex2D(_Gradient2D, float2(x*8.0, 0)).rg *2.0 - 1.0;
				return dot(g, p);
			}

			float inoise(float2 p)
			{
				float2 P = fmod(floor(p), 256.0);	// FIND UNIT SQUARE THAT CONTAINS POINT
				p -= floor(p);                      // FIND RELATIVE X,Y OF POINT IN SQUARE.
				float2 f = fade(p);                 // COMPUTE FADE CURVES FOR EACH OF X,Y.

				P = P / 256.0;
				const float one = 1.0 / 256.0;

				// HASH COORDINATES OF THE 4 SQUARE CORNERS
				float A = perm(P.x) + P.y;
				float B = perm(P.x + one) + P.y;

				// AND ADD BLENDED RESULTS FROM 4 CORNERS OF SQUARE
				return lerp(lerp(grad(perm(A), p),
								   grad(perm(B), p + float2(-1, 0)), f.x),
							 lerp(grad(perm(A + one), p + float2(0, -1)),
								   grad(perm(B + one), p + float2(-1, -1)), f.x), f.y);

			}

			// fractal sum, range -1.0 - 1.0
			float fBm(float2 p, int octaves)
			{
				float freq = _Frequency, amp = 0.5;
				float sum = 0;
				for (int i = 0; i < octaves; i++)
				{
					sum += inoise(p * freq) * amp;
					freq *= _Lacunarity;
					amp *= _Gain;
				}
				return sum;
			}

			half4 frag(v2f i) : COLOR
			{
				float n = fBm(i.uv, 4);

				return half4(n,n,n,1);
			}

			ENDCG

		}
	}
	Fallback "VertexLit"
}
So, looking specifically at the vertex shader:
code:
			 v2f vert(appdata_base v)
			{
				v2f o;
				o.pos = UnityObjectToClipPos(v.vertex);
				o.uv = _Scale * (v.texcoord - 0.5) + _Offset / _Frequency;

				return o;
			}
I don't really get why dividing by frequency fixes it (my guess: fBm multiplies the coordinate by _Frequency before sampling the first octave, so pre-dividing the offset cancels that out and panning stays put when the zoom changes), but it is very similar to the corrected code posted in Sebastian Lague's procedural terrain video:

code:
float sampleX = ((x - halfWidth) / scale * frequency) + (octaveOffsets[i].x / scale * frequency);
float sampleY = ((y - halfHeight) / scale * frequency) + (octaveOffsets[i].y / scale * frequency);
It took a bit of effort to explain the problem because of confusion between "scaling" as in correcting for aspect ratio and scaling as in zooming. Some solutions let me zoom the texture but the zoom was still off-center, and so on, until eventually we figured out the above solution.

code:
o.uv = _Scale * (v.texcoord - 0.5) + _Offset / _Frequency;
From there I also went and decided to make some shaders to apply a fall off map to the generated texture.



I figure it's much faster to blend textures on the GPU and re-Blit the result to a new texture than to either use multithreading to apply Photoshop-style blends or just run a loop. Not that running a loop would've been slow; it was the noise function that made things intolerably slow.

Although I hated the streaks on the diagonals, so after discovering Unity's Shader Forge has a Rounded Rectangle node, I spent some time, through sheer trial and error, reimplementing it as a custom node.

code:
// Clamp the corner radius to the rectangle's half-extents.
float a = min(abs(r * 2), w);
float b = min(a, h);

r = max(b, 1e-5);
// Distance from the centered, mirrored UV to the rounded-rectangle edge.
float2 uvs = abs(uv * 2 - 1) - float2(w, h) + f;
float d = length(max(0, uvs)) / v;
Out = d;
(where w,h is the offset; v and f used to be the radius, but I split it up to try to get better control, though that's probably not necessary given what's below).



I then plug the resulting value into another custom function node that raises the value to a power based on separate A,B inputs to adjust the falloff map. Basically I wanted the "fall off" of the falloff map but in the shape of a rectangle; the "linear"-based falloff map resulted in ugliness along the diagonals. If I knew how to do, like, bicubic sampling or Gaussian blur, maybe that would've fixed it, but it seemed easier to reimplement the rounded rectangle.

code:
// Falloff curve: A steepens the transition, B shifts where it kicks in.
Out = pow(Value, A) / (pow(Value, A) + pow(B - B * Value, A));
I now do this for each Photoshop-style blend mode I'm interested in (Multiply, Screen, Blend, etc.), as my goal is to layer a bunch of different noise maps together to get improved, more natural results:



Multiplying ridged noise and fBm noise, for instance, gives mountainous-looking islands with ridges, and so on.

I've tested it and the results are Blit-able back to a texture, so I'm pretty happy that I can get the results I want. I'm basically working on a procedural random-world generator where I'd like to give the user/player the ability to create a world as they want.
