|
I haven't tried this, but off the top of my head: what if you passed a vertex stream with ptex-style UVs - i.e. 0 to 1 in both directions across the quad? Then you could just define a distance function to the nearest edge and return that as your colour for anti-aliased lines, and clip when it's == 0. Maybe with some extra parameters if you want lines that are a consistent pixel width. There might be a smarter way to do it that avoids an extra vertex stream, or at least generates it in the VS from less data. If you're using triangle lists and always have quad triangles back-to-back in that list, you could maybe generate it from the vertex ID, I think?
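A rough CPU-side sketch of that vertex-ID idea (the corner ordering below is an assumption about the index buffer, and in a real renderer this would live in the vertex shader using gl_VertexID):

```cpp
#include <cassert>
#include <utility>

// If every quad is two back-to-back triangles in a triangle list, each quad
// owns 6 consecutive vertices, so the vertex ID alone pins down the corner.
// Assumed corner order per quad: 0-1-2, 0-2-3 (adjust to your winding).
std::pair<float, float> quadUV(int vertexID) {
    static const float corners[4][2] = {
        {0.f, 0.f}, {1.f, 0.f}, {1.f, 1.f}, {0.f, 1.f}};
    int local = vertexID % 6;
    int corner = (local < 3) ? local : (local == 3 ? 0 : local - 2);
    return {corners[corner][0], corners[corner][1]};
}

// Distance to the nearest quad edge in UV space; a fragment shader would
// feed something like this through fwidth()/smoothstep() if you want
// consistent-pixel-width anti-aliased lines.
float edgeDistance(float u, float v) {
    float du = (u < 1.f - u) ? u : 1.f - u;
    float dv = (v < 1.f - v) ? v : 1.f - v;
    return (du < dv) ? du : dv;
}
```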
|
# ? Sep 14, 2016 10:36 |
|
|
# ? May 15, 2024 03:39 |
|
Yeah something like that should work, I'll have to try later. Since my lists are all quads the clever approach should probably work as well but I'll try the obvious one first...
|
# ? Sep 14, 2016 11:41 |
I feel like this is something you could use the geometry shader for, maybe? I could see that becoming a problem if you're using a triangle strip to draw the quad, but if not you can model your vertex data in such a way that the diagonal is the last vertex pair, and output a line strip for only the first two vertex pairs. E: Although, I guess doing it like this, any assumptions about winding order go out the window Joda fucked around with this message at 18:27 on Sep 14, 2016 |
|
# ? Sep 14, 2016 18:14 |
|
Can't you just draw lines around the quads after you've drawn the quads themselves? Like, draw the quads, then bind a shader that outputs a solid color, specify GL_LINES instead of GL_TRIANGLES or whatever for the second draw call along with an array that just has the quad corners, etc.
Doc Block fucked around with this message at 19:38 on Sep 14, 2016 |
# ? Sep 14, 2016 19:30 |
|
Doc Block posted:Can't you just draw lines around the quads after you've drawn the quads themselves? Like, draw the quads, then bind a shader that outputs a solid color, specify GL_LINES instead of GL_TRIANGLES or whatever for the second draw call along with an array that just has the quad corners, etc. I think that can have problems with z-fighting? I've actually done this before, and I ended up drawing everything twice with a combination of glPolygonOffset and glDepthRange to avoid z-fighting, but it would be nice to do it in a single draw call. Also, I thought glPolygonOffset was deprecated, but according to https://www.opengl.org/sdk/docs/man/html/glPolygonOffset.xhtml it's still in 4.5
|
# ? Sep 14, 2016 21:33 |
|
I guess I just assumed this was for something 2D, so you'd have the depth buffer turned off
|
# ? Sep 14, 2016 22:55 |
|
netcat posted:I think that can have problems with z-fighting? I've actually done this before and then I did draw everything twice with combination of glPolygonOffset and glDepthRange to avoid z fighting but it would be nice to do it in a single draw call. Also I thought glPolygonOffset was deprecated but I guess according to https://www.opengl.org/sdk/docs/man/html/glPolygonOffset.xhtml it's in 4.5 Yeah, when I did this, I used the evil option of gl_FragDepth = gl_FragCoord.z - 1e-6; in my fragment shader.
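For scale, a quick back-of-the-envelope check on why a constant bias like that works (assuming a 24-bit fixed-point depth buffer; with a floating-point depth buffer the step size varies and this arithmetic doesn't apply):

```cpp
#include <cassert>

// One representable depth step in a 24-bit fixed-point buffer is
// 1/(2^24 - 1); a constant bias of 1e-6 is therefore roughly 17 steps,
// enough to win the depth test against the fill pass without the lines
// visibly floating off the surface.
double depthStep24() { return 1.0 / ((1 << 24) - 1); }
double biasInSteps(double bias) { return bias / depthStep24(); }
```

One side effect worth knowing: writing gl_FragDepth disables early depth testing for that shader, which is part of what makes it the "evil" option.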
|
# ? Sep 14, 2016 23:13 |
|
Suspicious Dish posted:Yeah, when I did this, I used the evil option of gl_FragDepth = gl_FragCoord.z - 1e-6; in my fragment shader. If you set your depth testing to equal or less equal (or greater equal depending) and use the same matrices, it should generally work without z fighting. At least in my experience. Also a fixed function depth offset should disrupt the pipeline less right? Sex Bumbo fucked around with this message at 05:43 on Sep 15, 2016 |
# ? Sep 15, 2016 05:39 |
|
My recollection is that glPolygonOffset did nothing for me, whereas the gl_FragDepth hack worked, but maybe I was just being an idiot.
|
# ? Sep 15, 2016 06:02 |
I'm working on some hacky way to make a sub-surface-scattering-like effect where I basically just do a Gaussian blur over a texture that maps onto an object based on how its original texture maps, but I'm not sure how to deal with seams. I have an idea where I have a separate lookup texture that refers to wherever the seam links to, in case the filter tries to look up values that go across a seam, and generating this texture is where I'm stuck. Is there an algorithm of some sort that is able to somehow identify both sides of a seam? Like, say I give it a model with UVs and it returns a list of edges (pairs of index pairs) that constitute seams. I tried googling, but all I get is stuff about seam removal from textures/models. Joda fucked around with this message at 00:20 on Oct 8, 2016 |
|
# ? Oct 8, 2016 00:12 |
|
I am using Assimp to load a rigged mesh into an OpenGL application; this application is meant to allow me to manually manipulate the bones and see them transform the mesh in real time. Everything works, except that I want to rotate the bones so that they rotate around the global Z axis facing the camera. So that, supposing a hand is facing mostly towards me, the hand rotates as though its joint's Z axis were pointed towards the screen. Right now I have this result. The bones rotate according to their local Blender roll axis. code:
quote:Obtain the transformation from bone space to view space (i.e. the view transformation multiplied by the global bone transformation). Invert it to get the transformation from view space to bone space. Transform the vector (0,0,1,0) by the inverse to get the bone-space rotation axis. I am probably misunderstanding this but, I now have this: code:
Lines for the axis it rotates around, circle if it happens to be pointed at the camera. CurrentNodeTransform is the transform of the bone relative to its parent. I've tried cramming in there, at different locations, a version that had the local transform multiplied by its parent transform, but that just made things weird. e: Edited a couple of times to not break the forum CSS. Raenir Salazar fucked around with this message at 18:39 on Oct 27, 2016 |
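For what it's worth, a minimal sketch of the quoted recipe. It strips the problem down to a 3x3 rotation (translation doesn't move direction vectors, and a pure rotation's inverse is just its transpose); mulTranspose and the type names here are made up for illustration, not Assimp API:

```cpp
#include <array>
#include <cassert>

using Mat3 = std::array<std::array<float, 3>, 3>;
using Vec3 = std::array<float, 3>;

// Multiply by the transpose, i.e. by the inverse of a pure rotation.
Vec3 mulTranspose(const Mat3& m, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += m[j][i] * v[j];  // swapped indices = transpose
    return r;
}

// boneToView is the rotation part of (view * globalBoneTransform); the
// bone-space axis for "rotate about the Z facing the camera" is that
// matrix's inverse applied to (0,0,1).
Vec3 viewZInBoneSpace(const Mat3& boneToView) {
    return mulTranspose(boneToView, Vec3{0.f, 0.f, 1.f});
}
```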
# ? Oct 27, 2016 18:36 |
|
Aaaaand I solved it. No idea how I was so close before but somehow fumbled the ball at the last second; I feel like I would've tried this permutation of multiplications and I just don't know how I missed it. code:
Second to last line needed to be: code:
code:
Raenir Salazar fucked around with this message at 22:47 on Oct 27, 2016 |
# ? Oct 27, 2016 22:45 |
|
Any idea why my verts are off when trying to draw instanced cubes? code:
code:
My code is in Rust using gfx, and located here, but I'm pretty sure it's my shader code loving up here. e: it could be that my w-parameter (which is defined to be 1 for each vertex) is getting scaled. e2: Yep, that was it. Lesson learned, don't multiply the w-param when doing local scaling. gonadic io fucked around with this message at 15:10 on Nov 13, 2016 |
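A minimal reproduction of that bug in plain C++ (Vec4 here is a hypothetical stand-in, not the gfx types): if the "scale" multiplies all four components, the perspective divide cancels it out.

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Buggy version: scales w along with xyz.
Vec4 scaleAll(Vec4 v, float s) { return {v.x * s, v.y * s, v.z * s, v.w * s}; }

// Correct version: leaves w alone so the divide doesn't undo the scale.
Vec4 scaleXYZ(Vec4 v, float s) { return {v.x * s, v.y * s, v.z * s, v.w}; }

// What the hardware does after the vertex shader (x component shown).
float afterDivideX(Vec4 v) { return v.x / v.w; }
```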
# ? Nov 13, 2016 13:22 |
|
So, for some reason, code:
code:
The foremost code results in rotations in approximately 30-degree increments for each unit of rotation. So Y: 1 results in my model rotating about 30 degrees, and thus Y: 90 looks like it goes something like 15 to 30 degrees too far. But glm::radians works. This is completely at odds with the documentation for GLM as far as I can tell, since it says "angleInDegrees" for the second parameter. What gives?
|
# ? Dec 11, 2016 13:37 |
|
Raenir Salazar posted:So, for some reason, Documentation for 0.9.8 says that the angle should be expressed in radians: http://glm.g-truc.net/0.9.8/api/a00169.html#ga161b1df124348f232d994ba7958e4815 From the GLM manual: "7.11. What unit for angles is used in GLM? GLSL is using radians but GLU is using degrees to express angles. This has caused GLM to use inconsistent units for angles. Starting with GLM 0.9.6, all GLM functions are using radians."
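glm::radians is nothing more than this conversion, so if a newer GLM build gets a raw 90.0f it reads it as 90 radians (a bit over 14 full turns), which would explain rotations that look arbitrary:

```cpp
#include <cassert>
#include <cmath>

// Same conversion glm::radians performs.
float toRadians(float degrees) { return degrees * 3.14159265358979f / 180.f; }
```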
|
# ? Dec 11, 2016 14:29 |
|
That explains it! Google brings me 0.9.3~ish versions of the manual for some reason.
|
# ? Dec 11, 2016 14:35 |
|
Is there any particular reason why glDepthFunc accepts GL_NEVER and GL_ALWAYS? I can't think of any.
|
# ? Dec 11, 2016 18:27 |
|
Nehacoo posted:Is there any particular reason why glDepthFunc accepts GL_NEVER and GL_ALWAYS? I can't think of any. I could see a theoretical reason for pairing them with stencil states that use depth-pass/depth-fail behavior. Not sure why you'd ever WANT to do that, but that's not usually a good reason to restrict something from an API by itself.
|
# ? Dec 11, 2016 18:32 |
|
Hubis posted:I could see a theoretical reason for pairing with stencil states with depth-pass/depth-fail behavior. Hmm, I can't see it because I think that is beyond my current level of understanding of OpenGL (I haven't touched upon stencils at all) Hubis posted:Not sure why you'd ever WANT to do that, but that's not usually a good reason to restrict something from an API by itself. I get your point, but sometimes I wish OpenGL was a bit more restricted so that mortals could comprehend it. I understand that has more to do with legacy, though.
|
# ? Dec 11, 2016 18:39 |
|
Nehacoo posted:Hmm, I can't see it because I think that is beyond my current level of understanding of OpenGL (I haven't touched upon stencils at all) Yeah, OpenGL is basically the worst of all worlds in that sense: half the API is a relic of a style no one uses any more, the other half is a gross hack towards modern techniques that is inelegant because of the need for compatibility with legacy features, and none of it really matches the way hardware ACTUALLY works nowadays.
|
# ? Dec 11, 2016 19:56 |
|
I made an effort to learn 'modern' OpenGL 4 not too long ago, and it seems to me like you should only use the minimal amount of actual GL calls in your code to load up your card, and then perform all of the transformations, coloring, masking, etc. in shaders. Heck, I've been doing collision detection with compute shaders; all I have to do is chuck all of my data structures at the graphics card, and fill them back in after every go. I don't know how it might work with ES though. I think the main problem with the GL API is that, due to OpenGL's history, there are like a dozen ways to put a cube on the screen, some of which are not good, and it's difficult for beginners to pick a starting point for study. dougdrums fucked around with this message at 17:02 on Dec 12, 2016 |
# ? Dec 12, 2016 16:44 |
|
If you really want to learn a low level graphics API these days you should probably just go straight to Vulkan. It's big and complicated and the drivers aren't super stable yet, but at least it makes sense.
|
# ? Dec 14, 2016 00:25 |
|
I'm working on mesh processing, and I'm finding it hard to settle on an IO library / format. Dealing with textures is the most problematic. An example: I want a library that makes the following process easy.

1. Load an untextured mesh consisting of a vertex list and face list (no texture information) from file_0.ext
2. Assign texture coordinates to some of the vertices, referencing image 1.
3. Write file_1.ext
4. Assign texture coordinates to some of the vertices, referencing image 2.
5. Write file_2.ext
6. In a new binary, load file_0.ext, load file_1.ext and file_2.ext into memory.
7. Retrieve the texture coordinates and material properties (including image file path) for the vertex in file_1.ext and file_2.ext corresponding to vertex i in file_0.ext.

I've messed with AssImp and OpenMesh so far, and it seems like their internal data structures do not reflect this sort of structure. AssImp treats a mesh that has several materials as several different meshes, each with a separate vertex list, which destroys the original structure of the index vector. Thus, there is no connection between the original index vector and that which is written in file_1.ext and file_2.ext. OpenMesh's support for texture coordinates seems to be completely broken. It seems like the Wavefront OBJ format is designed to handle situations like this. e.g. code:
The Gay Bean fucked around with this message at 02:06 on Dec 14, 2016 |
# ? Dec 14, 2016 01:07 |
|
Obj is the simplest but it doesn't contain animation info, so if that's something you may want to add later, you'll have to switch formats, or come up with one on your own. Fbx is pretty similar to the obj format but can hold animation info. There are a few parsers already written out there that you can use to build your arrays in main memory, and either pass them directly to your scene, or sort them based on how you want to represent/draw the triangles. elite_garbage_man fucked around with this message at 05:45 on Dec 14, 2016 |
# ? Dec 14, 2016 05:40 |
|
I managed to finish my final project! Here's a Demo video + Commentary. Basically I made an OpenGL application that animates a rigged mesh using the Windows Kinect v2. There are two outstanding issues:

1. Right now every frame is a keyframe when inserting. I don't really have it so that you can have a 30-second animation with, say, 3 keyframes where it interpolates. I'm seeing if I can fix it, but I am getting some strange bad-allocation memory errors when I try, on super simple lines of code too, like aiVector3D* positionKeys = new aiVector3D[NEW_SIZE]; I don't get it, I'm investigating.

2. It only in theory works on any mesh; they have to share the same skeleton structure and names, and then their bones have to have some arbitrary orientation that matches the Kinect. But when I try to fix it so it matches, it ruins my rigging on the pre-supplied blend files I found off of youtube from Sebastian Lague. I'd have to reskin the meshes to the fixed orientations, which is a huge headache as I'm not 100% sure how the orientations have to be in Blender to make the Kinect happy.

quote:- Bone direction (Y, green) - always matches the skeleton.

Okay, so Y makes sense to me. Follows the length of the bone from joint to joint; I'm not sure if it's positive Y or negative Y, but I hope it doesn't matter. In Blender the default orientation following most tutorials is a positive Y orientation facing away from the parent bone. Now "Normal" and "Binormal" don't make sense to me in any practical way. If the bone is following my mesh's arm, is Z palm up or palm down? This is all I really care about and I don't see anything in my googling that implies what's correct. Using Blender's "Recalculate Bone Roll" with "Global Negative Y Axis" points the left arm's Z axis forward, and sometimes this gives good results?
I want my palm movement to match my palm orientation but it's hard to get this right because my mesh gets deformed editing my bones without rerigging it and it's hard to know up front if I'm right.
|
# ? Dec 18, 2016 02:23 |
|
Raenir Salazar posted:I managed to finish my final project! Here's a Demo video + Commentary. I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, so perpendicular to the plane of movement of the bone, and then the binormal is what you would get from (bone direction) x normal. Which should get a bit confusing with the thumb, which insists on being opposable in humans, and is also missing a joint. As for exporting from Blender, what I've found is that it's basically impossible to closed-form resolve it, so just try out a few until you find the one that works, and then save that export configuration for future use.
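Spelling that frame out as a cross product (assuming the convention the post guesses at: bone direction along +Y, roll/normal along +Z):

```cpp
#include <array>
#include <cassert>

using V3 = std::array<float, 3>;

// Right-handed cross product; with direction = +Y and normal = +Z,
// the binormal lands on +X, i.e. (direction x normal) completes the basis.
V3 cross(const V3& a, const V3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}
```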
|
# ? Dec 18, 2016 02:53 |
|
Absurd Alhazred posted:I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, so perpendicular to the plane of movement of the bone, Would this be that? Y follows the bone. But then X and Z feel like they could be anything. I'm confused; can't the bone rotate on either the X or Z axis?
|
# ? Dec 18, 2016 03:08 |
|
Raenir Salazar posted:
Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it? * Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.
|
# ? Dec 18, 2016 03:17 |
|
Raenir Salazar posted:
The fact that you can appear to bend your arm around either axis is really a facet of the Y axis rotation of the bone above it in the hierarchy; when you rotate *that* bone 90 degrees you effectively switch the directions of the X and Z axis of the lower bone (sign notwithstanding). The body does a pretty good job of hiding this, but with your elbow out try touching your fist to your chest, watch the joint, and try to then point your lower arm upwards without the joint rotating. It doesn't work.
|
# ? Dec 18, 2016 03:22 |
|
Absurd Alhazred posted:Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it? IIRC the Kinect treats every bone the same way. It only represents one finger and the thumb though. MSDN Though neither of my meshes have fingers, IIRC. In Blender, unless I have Inverse Kinematics, I can rotate the bones however I want when animating; so if Z is the bone normal/roll and X is the binormal, how should the bones be oriented in Blender with respect to their "natural" plane of movement? Which brings me to: roomforthetuna posted:The fact that you can appear to bend your arm around either axis is really a facet of the Y axis rotation of the bone above it in the hierarchy; when you rotate *that* bone 90 degrees you effectively switch the directions of the X and Z axis of the lower bone (sign notwithstanding). The body does a pretty good job of hiding this, but with your elbow out try touching your fist to your chest, watch the joint, and try to then point your lower arm upwards without the joint rotating. It doesn't work. I can see this, but what about the shoulder? It can rotate forwards (holding my arms in front of me) and can rotate sideways, so that my arms point away from my body. Or is my collarbone doing the rotating here that causes that axis change?
|
# ? Dec 18, 2016 03:50 |
Anyone know what the reasoning is behind not requiring RGB format textures other than 11_11_10 and 10_A2 to be renderable in the OpenGL standard? I've used RGB16/32F as render targets a couple of times in projects, and am really surprised to learn that vendors are not actually required to allow this. It just seems really arbitrary.
Joda fucked around with this message at 08:25 on Jan 7, 2017 |
|
# ? Jan 7, 2017 08:20 |
|
As a guess: because nobody supports non-power of two pixels in hardware.
|
# ? Jan 7, 2017 19:12 |
|
Joda posted:Anyone know what the reasoning is behind not requiring RGB format textures other than 11_11_10 and 10_A2 to be renderable in the OpenGL standard? I've used RGB16/32F as render targets a couple of times in projects, and am really surprised to learn that vendors are not actually required to allow this. It just seems really arbitrary. It's likely because you're looking for RGB, not RGBA, because: pseudorandom name posted:As a guess: because nobody supports non-power of two pixels in hardware. If you look at GL info, or even Vulkan info, dumps for various GPUs you'll notice they really like 1-, 2-, or 4-component formats, and 3-component tends to be limited only to a few formats (like 8:8:8) that were historically common, or ones with varying bit counts that add up right (5:6:5, 11:11:10)
|
# ? Jan 7, 2017 20:02 |
|
Don't forget that the GPU driver may just be lying to you entirely about what features the hardware supports or what the actual format is that you're using. Here's the complete list of render target formats supported by Radeon Sea Island GPUs, for example:

FORMAT - Specifies the size of the color components and in some cases the number format. See the COMP_SWAP field below for mappings of RGBA (XYZW) shader pipe results to color component positions in the pixel format. When reading from the surface, missing components in the format will be substituted with the default value: 0.0 for RGB or 1.0 for alpha.
POSSIBLE VALUES:
00 - COLOR_INVALID: this resource is disabled
01 - COLOR_8: norm, int, srgb
02 - COLOR_16: norm, int, float
03 - COLOR_8_8: norm, int, srgb
04 - COLOR_32: int, float
05 - COLOR_16_16: norm, int, float
06 - COLOR_10_11_11: float only
07 - COLOR_11_11_10: float only
08 - COLOR_10_10_10_2: norm, int
09 - COLOR_2_10_10_10: norm, int
10 - COLOR_8_8_8_8: norm, int, srgb
11 - COLOR_32_32: int, float
12 - COLOR_16_16_16_16: norm, int, float
14 - COLOR_32_32_32_32: int, float
16 - COLOR_5_6_5: norm only
17 - COLOR_1_5_5_5: norm only, 1-bit component is always unorm
18 - COLOR_5_5_5_1: norm only, 1-bit component is always unorm
19 - COLOR_4_4_4_4: norm only
20 - COLOR_8_24: unorm depth, uint stencil
21 - COLOR_24_8: unorm depth, uint stencil
22 - COLOR_X24_8_32_FLOAT: float depth, uint stencil

NUMBER_TYPE - Specifies the numeric type of the color components.
POSSIBLE VALUES:
00 - NUMBER_UNORM: unsigned repeating fraction (urf): range [0..1], scale factor (2^n)-1
01 - NUMBER_SNORM: Microsoft-style signed rf: range [-1..1], scale factor (2^(n-1))-1
04 - NUMBER_UINT: zero-extended bit field, int in shader: not blendable or filterable
05 - NUMBER_SINT: sign-extended bit field, int in shader: not blendable or filterable
06 - NUMBER_SRGB: gamma corrected, range [0..1] (only supported for COLOR_8, COLOR_8_8 or COLOR_8_8_8_8 formats; always rounds color channels)
07 - NUMBER_FLOAT: floating point: 32-bit: IEEE float, SE8M23, bias 127, range (-2^129..2^129); 16-bit: Short float SE5M10, bias 15, range (-2^17..2^17); 11-bit: Packed float, E5M6 bias 15, range [0..2^17); 10-bit: Packed float, E5M5 bias 15, range [0..2^17)
|
# ? Jan 7, 2017 20:41 |
|
I have a really stupid math problem. I have an instanced 3D arrow that I need to rotate according to a vector (x,y,z) I pull out of a data texture in a GLSL shader. code:
|
# ? Feb 9, 2017 20:42 |
To get the axis to rotate around, you can cross the model direction that expresses the local "pointing" of your arrow with the direction you loaded from the texture, then normalize. That is to say, if your arrow in model space points in modelDir and you want it to point in someDir, then you do normalize(cross(modelDir,someDir)) (you may have to swap the two vectors to get the right result, though.) Then, to get the angle to rotate by, you do acos(dot(modelDir,someDir)). E: Note, you may have to guard against the two directions being the same before normalizing, or you can get some weird results from the normalization. A simple if(modelDir == someDir) {rotationMat = mat4();} should do. Joda fucked around with this message at 20:57 on Feb 9, 2017 |
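The same recipe as a CPU-side sketch (the anti-parallel case is worth flagging too, since the cross product also vanishes when the two vectors point in exactly opposite directions):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using V3f = std::array<float, 3>;

// Normalized cross product: the rotation axis. Degenerates (division by
// zero) when a and b are parallel or anti-parallel; guard for that first.
V3f crossN(const V3f& a, const V3f& b) {
    V3f c{a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]};
    float len = std::sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
    return {c[0] / len, c[1] / len, c[2] / len};
}

// Rotation angle between two unit vectors; the clamp keeps tiny float
// error from pushing the dot product outside acos's domain.
float angleBetween(const V3f& a, const V3f& b) {
    float d = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    d = std::fmax(-1.f, std::fmin(1.f, d));
    return std::acos(d);
}
```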
|
# ? Feb 9, 2017 20:54 |
|
Holy christ thank you. I should've been committing more just so I could see where my stupid mistake was, because this looks a whole hell of a lot like what I had before. ( I may have missed the outer "normalize" on the axis calculation ... I think... )
|
# ? Feb 9, 2017 21:15 |
What's the best way to stream texture data to the GPU in a separate thread while also performing draw calls? I have all my texture data in array textures to avoid state changes as much as possible, and I won't be reading from a region that is currently being written to while the texture is being streamed. Apparently there's a thing called a pixel buffer object that I can bind to a pointer with glMapBuffer, but I can find very little documentation about it (at least in the way of direct explanations of what GL calls to use to stream the data from a pixel buffer to a region in a texture.) What about DXT compression for something like this?
|
|
# ? Feb 11, 2017 01:41 |
How fast is glCopyImageSubData()? Does it work with DXT compressed textures? I was thinking i could stream the texture into a "normal" 2D texture, then copy it over to the array texture when I'm done.
|
|
# ? Feb 11, 2017 22:06 |
|
|
Joda posted:How fast is glCopyImageSubData()? Does it work with DXT compressed textures? I was thinking i could stream the texture into a "normal" 2D texture, then copy it over to the array texture when I'm done. Not sure why you're simultaneously trying to avoid state changes (usually done to avoid sending a lot of data to/from the gpu and keep things fast) while also trying to modify pixels on the cpu (which is the opposite, very incredibly slow and doesn't use the gpu's power at all). Might be an XY problem here. What are you attempting to do, and why can't you do it on the fast gpu instead of doing stuff on the slow cpu? You should be streaming all your texture data needed up front to the gpu and just binding different textures back and forth with the draw calls, perhaps rendering to texture with rendertargets/framebuffers if you need feedback loops of some kind (like, say, a post process). The closest thing you get to draw calls in different threads is building sub-command buffers asynchronously and combining them at some synchronous time in the future and submitting it to draw. Interested to know what your exact goal is!
|
# ? Feb 12, 2017 00:49 |