|
Raenir Salazar posted:I managed to finish my final project! Here's a Demo video + Commentary. I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, so perpendicular to the plane of movement of the bone, and that the binormal is what you would get from (bone direction) x normal. That should get a bit confusing with the thumb, which insists on being opposable in humans and is also missing a joint. As for exporting from Blender, what I've found is that it's basically impossible to resolve in closed form, so just try out a few export settings until you find the one that works, and then save that configuration for future use.
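In code, the binormal-from-cross-product bit is just this (a minimal sketch in plain C++; the type and function names are mine):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<float, 3>;

// Binormal as (bone direction) x (normal): the cross product yields the
// third axis of the bone's local frame, perpendicular to both inputs.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}
```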
|
# ¿ Dec 18, 2016 02:53 |
|
|
# ¿ Apr 29, 2024 09:47 |
|
Raenir Salazar posted:
Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it? * Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.
|
# ¿ Dec 18, 2016 03:17 |
|
Joda posted:I want to batch all meshes that use the same shader into the same draw call, and for that I need all textures to be in the same array texture (I'm doing it this way because I'm expecting to have a LOT of visually distinct smaller objects, and for that to be efficient I would have to batch them.) I don't want to modify anything CPU-side, but new textures will have to go through the CPU side from the HDD before it's on the GPU. I want to stream the data asynchronously, because I don't want to stall the application while I'm loading in a new texture. Have you tried only threading out the loading from HDD to CPU memory, rather than the texture calls?
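A rough sketch of the split I mean — only the disk read happens on the worker thread, and the GL upload stays on the context thread (plain C++; the queue shape and all names are my invention, and the actual glTexSubImage3D call into the array texture is elided):

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A loaded image waiting for its GL upload on the main thread.
struct PendingTexture {
    std::string name;
    std::vector<unsigned char> pixels;
};

std::mutex g_mutex;
std::queue<PendingTexture> g_ready;

// Worker thread: only the disk -> CPU memory part happens here.
void loaderThread(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> bytes{std::istreambuf_iterator<char>(in),
                                     std::istreambuf_iterator<char>()};
    std::lock_guard<std::mutex> lock(g_mutex);
    g_ready.push(PendingTexture{path, std::move(bytes)});
}

// Main thread, once per frame: drain the queue and make the GL calls from
// the thread that owns the context. Returns true if anything was uploaded.
bool pumpUploads() {
    std::lock_guard<std::mutex> lock(g_mutex);
    bool uploaded = false;
    while (!g_ready.empty()) {
        PendingTexture tex = std::move(g_ready.front());
        g_ready.pop();
        // glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, w, h, 1, ...);
        uploaded = true;
    }
    return uploaded;
}
```

That way the GL calls never leave the context thread, which sidesteps the whole shared-context mess.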
|
# ¿ Feb 12, 2017 01:13 |
|
Doc Block posted:Is OpenGL on Windows still a toxic hellstew of driver updates breaking previously working OpenGL programs, vendor GLSL compiler implementations being broken in incompatible ways, one vendor's OpenGL implementation running your app just fine while another's crashes because you did things in an order it doesn't like, etc. etc. etc.? You should be fine, but be sure to use GLEW and GLFW to simplify getting started, running extensions, and working with Windows. On the whole it's not as "modern" as, say, DX11, but it's not too bad. I think drivers are way more stable these days, too.
|
# ¿ Jul 4, 2017 01:14 |
|
Doc Block posted:Another question: is it OK to fetch a GLSL uniform variable's location when the shader is linked and then save it (so I don't have to ask OpenGL for it every time) or should I ask every time in case something causes it to change (behind the scenes shader recompilation or whatever)? Yeah, you shouldn't need to query it again unless you actively recompile/relink it.
|
# ¿ Jul 4, 2017 02:21 |
|
lord funk posted:Yep, quaternions are great! If you just translate them they're going to get projected towards the center of the screen rather than where you want them. I think your best bet is to perspective render them to a texture once, and then write that texture orthographically in multiple places.
|
# ¿ Mar 22, 2018 02:15 |
|
Xerophyte posted:If you want an in-viewport translation with the perspective preserved then you can also just change the viewport transform. Render to texture makes sense if you intend to reuse the result in several places or over several frames. Yeah, the problem with multiple viewports is that you have to change viewports per instance. It's not really all that difficult to do: most modern GPUs support selecting the viewport in the geometry shader, and an extension lets you push it down to the vertex shader instead. But from the use case it seems better to render once and use many times, and render to texture is the best way to do that, I think. It really depends on the use case; you're right if the image needs to change every frame (although you could render all frames to a sprite sheet and sample a different one each frame, assuming a finite set of sprites covers it for you).
|
# ¿ Mar 22, 2018 12:40 |
|
lord funk posted:Yeah that makes total sense! Thanks for the approach details. You could take care of that with a normal map, but yeah, maybe instancing with multiple viewports is the thing to do.
|
# ¿ Mar 22, 2018 14:20 |
|
I finally got some geometry shading going in our codebase. Not too difficult to do, but are there any nice best-practices guides? I know you're not supposed to multiply too many primitives, but it is standard to use it for points -> billboards, right?
|
# ¿ Mar 27, 2018 03:37 |
|
Ralith posted:This is massive overkill, and I wouldn't be shocked if it actually performed slower than, say, instancing. So having most of the information in the instance variables and applying it to a single quad is better than having the same number of vertices and using a geometry shader to expand them into quads?
|
# ¿ Mar 27, 2018 05:23 |
|
Hubis posted:Geometry shader billboards is bad Any specific reason for that, or is it empirical?
|
# ¿ Mar 28, 2018 02:13 |
|
Hubis posted:Sorry, was phone-posting! This is very informative. Thanks!
|
# ¿ Mar 28, 2018 12:59 |
|
Suspicious Dish posted:hi thread why am i still dealing with row major / column major issues in 2018 can someone explain ok bye It's extremely irritating, but you feel like a champ when you manage to reduce a bunch of operations involving several vectors into a handful of initializations and matrix multiplications. One way to get a handle on it is to intentionally create glm::mat2x3 matrices and operate with them on glm::mat2s and glm::mat3s, or initialize the same matrix through a list of floats and then through vectors, and then just look at the elements in a debugger, or have it printed out for you. The good news is that GLM is set up to work more or less the same way as matrix operations in GLSL, so the knowledge will transfer.
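For example, the debugger exercise boils down to this (plain arrays standing in for GLM so the column-major layout is explicit; the names are mine):

```cpp
#include <array>
#include <cassert>

// Column-major 3x3, GLM/GLSL style: element (row r, column c) is stored at
// flat index c*3 + r, i.e. each column is contiguous in memory.
using Mat3 = std::array<float, 9>;

float at(const Mat3& m, int r, int c) { return m[c * 3 + r]; }

// Build from three column vectors, like glm::mat3(c0, c1, c2). Note the flat
// initializer list is the columns laid end to end, NOT the rows.
Mat3 fromColumns(std::array<float, 3> c0,
                 std::array<float, 3> c1,
                 std::array<float, 3> c2) {
    return {c0[0], c0[1], c0[2],
            c1[0], c1[1], c1[2],
            c2[0], c2[1], c2[2]};
}
```

Seeing that `at(m, 0, 1)` fishes element 4 out of a matrix built from columns {1,2,3}, {4,5,6}, {7,8,9} is the kind of thing that makes the layout click.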
|
# ¿ Apr 28, 2018 01:45 |
|
Suspicious Dish posted:i assure you my frustration is not one simply of mental confusion. that post is only true in a pre-glsl world where the matrices are opaque to the programmer. in glsl, matrices *are* column-major (to say nothing of upload order!) and you can access e.g. the second column with mtx[1]. you can choose whether you treat your vector as row-major or column-major by using either v*mtx or mtx*v, respectively. Wow. Never mind my sophomoric platitudes, then!
|
# ¿ Apr 28, 2018 02:04 |
|
KoRMaK posted:I picked up an opengl shader tool and am trying to write a method that captures the screen. The library had this method in it, which looks ok from what I've googled. Problem is, that whenever I try to read the pixels all I get is black pixels back and not what is currently on screen. First, just to keep things simpler, I would start with one PBO, even though it will be slower because OpenGL has to wait for each frame's read to finish before the next write, and write the image from the start of the buffer to the end, even though that gives an upside-down image. As written, the first frame you capture will be garbage, because you read before you write; moving to one buffer will fix that. Do you keep getting garbage from the second capture on? Also, are you sure you have the right framebuffer bound? Another general idea: check glGetError() after each OpenGL command. It might give you more insight into what's going wrong.
|
# ¿ May 19, 2018 18:05 |
|
mobby_6kl posted:Is anyone here developing 3D stuff for Win Mixed Reality? I have a few simple native OpenGL apps that I was hoping to port over as a quick test and learning exercise and it seems like it should be possible... somehow, but there's gently caress all documentation on this and most samples seem to just use Unity. WMR's native API is a mess. Unless they've changed things significantly, you need a managed Windows app just to access it directly. Your best bet is to use SteamVR instead, which doesn't have the same issues. There's Windows Mixed Reality for SteamVR to bridge the two, and it's free as far as I know (once you have the WMR headset).
|
# ¿ Sep 26, 2018 00:37 |
|
Ralith posted:The SteamVR API is a disaster zone itself, actually, though maybe for different reasons. Did anybody not screw up their VR APIs? I mean, at least SteamVR lets you just use it in C++ with OpenGL, DirectX, or Vulkan. What don't you like about it? I do have more experience with Oculus, I will admit.
|
# ¿ Sep 30, 2018 08:11 |
|
Ralith posted:Man, I have a whole list somewhere, but I'm not sure where I left it. The short story is that it's practically undocumented and rife with undefined behavior and myopically short-sighted design decisions. Yeah, it's nice that it isn't opinionated about the other tools you use, but that's just baseline sanity--a standard which it otherwise largely fails. Their own headers get their ABI totally wrong in places. Might they have gotten better since you last used them? I just added a new feature to our code last week using a more recent addition to the API, and it was mostly painless.
|
# ¿ Oct 1, 2018 13:03 |
|
mobby_6kl posted:OpenGL. I'm rendering a ton of pretty simple instances (tens to hundreds of thousands maybe, at least before culling) where each can have a different texture, though in practice there might be a few hundred. So far I'm testing with smaller numbers but everything works as it should with Texture Arrays, I just pass each instance the appropriate offset. Is this generally the right approach? I looked also into bindless textures but that seems rather more complicated and I'm not sure would be better. quote:In the end I didn't even touch WMR, SteamVR is what I ended up using and it works perfectly fine with WMR as well. I guess there might be some weirdness in the API but I haven't had much trouble implementing VR in a new app. Granted I copy/pasted a lot of the sample code because the documentation seemed to be somewhat lacking in places, but still. Yeah, I don't really see a future for the dedicated WMR API, thankfully. SteamVR can be a bit obtuse, but code samples can usually pull you through.
|
# ¿ Jan 31, 2019 04:54 |
|
Suspicious Dish posted:Texture arrays are superior to texture atlases in almost every way, if you can make them work. You get all of the benefits of an atlas without the filtering boundaries and other junk. Yeah, that's fair. You'd have to profile to see if your target GPUs incur any performance cost for using one over the other.
|
# ¿ Jan 31, 2019 15:13 |
|
Doc Block posted:Yeah I think I just became confused because I read somewhere that the minimum number of descriptor sets that the standard requires implementations to support is only 4 or something, and I think even that is just how many can be actually bound at once. It's been a while since I've played around with Vulkan, but wasn't there a way to pass the equivalent of preprocessor definitions at link/compile/whatever they call it when you add a SPIR-V object file as a shader to a pipeline? Some kind of compile-time constant or something?
|
# ¿ May 27, 2019 21:40 |
|
I hope this is the right thread for this: while I was reading about color transformations in GPU Gems I googled some dead links and found this evidence of some heated arguments about color coding and gamma correction from the late '90s. The internet has always made people stupid.
|
# ¿ May 29, 2019 00:02 |
|
Brownie posted:The first problem is that when I perform the first draw, I noticed that if a fragment's stencil value should be updated by both the front AND back stencil test, I only see a value of 1 (corresponding to only one increment). That's okay for my purposes (since once either tests pass, the pixel is unmarked), but I just wanted to confirm that at I shouldn't expect both tests to mutate the stencil buffer. Are you sure you've disabled backface culling, too?
|
# ¿ Jun 22, 2019 15:18 |
|
Brownie posted:Yeah, using RenderDoc I can see that the Cull Mode is set to NONE. If it was BACK or FRONT I'd also see artifacts in the final rendered image, but I don't, and manual inspection of the stencil shows that all the pixels I expect to be marked are marked, just some of them are only marked once instead of twice like I expect. I've not had a lot of experience with stencil tests, but there's also something where you have to ask it to perform the test on the back face, right? Is that set?
|
# ¿ Jun 22, 2019 16:23 |
|
czg posted:In an effort to learn stuff, I've been working on making a little vulkan renderer in my spare time. It's been a while since I've played with Vulkan, but don't you have to make sure a command buffer has been fully built and submitted (and perhaps even processed?) before changing any state it references? So you might need to double- or triple-buffer your descriptor sets. That being said, is that what you usually update in Vulkan? I thought you'd be updating the underlying data, with the descriptor sets staying more static.
|
# ¿ Aug 12, 2019 13:45 |
|
czg posted:Thanks for the hints.
|
# ¿ Aug 13, 2019 02:11 |
|
Odddzy posted:I'm a 3d artist in a games company and would like to learn a bit more about programming HLSL or GLSL stuff to get more of the technical aspect of the job. Could I have some recommendations of books that could break me in to the subject? I think your best bet would be to just get a book about one of the graphics APIs and learn from that, which gives you the pipeline context where shaders fit and everything. I've found The OpenGL SuperBible pretty readable, more so than the Programming Guide. I imagine DirectX has similarly approachable books.
|
# ¿ Sep 26, 2019 05:12 |
|
OneEightHundred posted:I'm working on porting a very old game to Windows. The game runs at a fixed 60Hz and changing the frame rate is not an option. Is there a way to determine the monitor refresh rate in windowed mode so I can skip or duplicate frames as necessary when the refresh rate isn't 60Hz? (I might use SyncInterval 0 instead, but I'm thinking that predictable skip/duplication rates is probably more consistent, and regardless of that, I still need a way of differentiating between 60Hz and anything else.) Just use a timer and have the graphics loop sleep the rest of the frame, or if you want to push frames out all the time, have your physics loop do that instead.
|
# ¿ Oct 30, 2019 03:38 |
|
OneEightHundred posted:That's basically what would happen in the SyncInterval 0 option. The only reason I'm even worrying about this is because of the potential pathological case at 60Hz where the present time is close to a frame boundary, in which case timing jitter could cause it to randomly overwrite the pending frame before it's been displayed, causing frame drops and duplication. I dunno if that's really much of an issue, but using SyncInterval 1 at 60Hz would avoid it since it would never overwrite a pending frame. If you're porting an old game to a modern computer I think you're way likelier to find yourself near the start of a frame with nothing to do than near the end of a frame. What I would do to avoid jitters is make sure I'm measuring towards the next frame, not 1/60 seconds from the start of frame. Have a frame counter you advance each frame, then multiply by 1.0/60.0 to get the current target end of frame. That way you won't be compounding errors on consecutive frames. Another option, which should avoid you pushing a bunch of frames in succession if some other part of the program or another running program results in you skipping frames, is to measure time from start of run, divide by frame time and use floor to get the last frame which should have finished, then calculate your sleep time to get to the end of the current frame.
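A sketch of the measure-from-start-of-run bookkeeping I'm describing (plain C++; the names are mine):

```cpp
#include <cassert>
#include <cmath>

// Fixed 60 Hz logic rate.
constexpr double kFrameTime = 1.0 / 60.0;

// Which fixed-rate frame should be current, given total elapsed seconds
// since the start of the run. Deriving this from total elapsed time (rather
// than adding 1/60 each frame) means per-frame timing error never compounds.
long currentFrame(double elapsedSeconds) {
    return static_cast<long>(std::floor(elapsedSeconds / kFrameTime));
}

// How long to sleep to reach the boundary that closes the current frame.
double sleepUntilFrameEnd(double elapsedSeconds) {
    return (currentFrame(elapsedSeconds) + 1) * kFrameTime - elapsedSeconds;
}
```

If you skipped frames because something stalled, `currentFrame` jumps ahead on its own, so you never burst out a backlog of frames trying to catch up.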
|
# ¿ Oct 30, 2019 14:30 |
|
Rhusitaurion posted:I have a question about geometry shaders. What if you used a compute shader to conditionally send vertices over to one of two vertex buffers, and only draw one of them?
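A CPU-side sketch of that routing idea — on the GPU this would be a compute shader appending to two SSBOs with atomic counters, but the logic is just a partition (plain C++; the names are mine):

```cpp
#include <cassert>
#include <vector>

struct Vertex {
    float x, y, z;
    bool visible;  // whatever per-vertex predicate the compute pass evaluates
};

// One pass routes each vertex into one of two buffers; the draw call then
// only touches the buffer you care about.
void partitionVertices(const std::vector<Vertex>& in,
                       std::vector<Vertex>& drawn,
                       std::vector<Vertex>& skipped) {
    for (const Vertex& v : in)
        (v.visible ? drawn : skipped).push_back(v);
}
```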
|
# ¿ Nov 11, 2019 05:21 |
|
Dominoes posted:Ooh good point. So CAD is something more tied into real world, while Blender etc is more assets for games/films etc that never leave the computer? I am looking for the former. I want to build a modular aeroponics thing for plants. Could Blender still work for this? Yeah, Blender doesn't really have the real-world connection. Honestly I hate working with it, but I think that's because 3D modeling isn't my thing; I just have to dabble in it for my job.
|
# ¿ Feb 2, 2020 05:39 |
|
Xeom posted:I've started to experiment a little with opengl for some 2D rendering. I've been coding my own math functions and such because hey its a fun hobby. Your "orthographic" projection looks like perspective to me.
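The tell is the divide by depth — a toy sketch of the difference, ignoring clip-space scaling and sign conventions (plain C++; the names are mine):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Orthographic: x and y pass through unchanged regardless of depth, so
// parallel lines stay parallel on screen.
Vec2 projectOrtho(float x, float y, float /*z*/) { return {x, y}; }

// Perspective: x and y shrink with distance (divide by depth), so distant
// points converge toward the center.
Vec2 projectPerspective(float x, float y, float z) { return {x / z, y / z}; }
```

If moving a point farther away changes where it lands on screen, you've built a perspective projection, whatever the function is named.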
|
# ¿ Aug 4, 2020 02:02 |
|
Suspicious Dish posted:row-major still works you just have to set the layout(row_major) flag in GLSL and hope that your GPU vendor didn't gently caress it up (they probably did) But the translation matrix is column-major. It's best to stick to a single majority instead of mixing them.
|
# ¿ Aug 4, 2020 17:14 |
|
Suspicious Dish posted:you don't have to mix them? I don't know what this means. In the code that we were presented, the translation and "ortho" projection matrices were in opposite majority for the intended use. You should stick to a single majority instead of mixing them.
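To make the majority mismatch concrete, here's where a translation lands in flat storage under each convention (plain arrays; the names are mine):

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<float, 16>;

// Column-major (the GLM/GLSL default): the translation column is the last
// column, so it occupies flat indices 12, 13, 14.
Mat4 translationColumnMajor(float tx, float ty, float tz) {
    return {1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            tx, ty, tz, 1};
}

// The same column-vector translation matrix stored row-major: the
// translation now lands at flat indices 3, 7, 11.
Mat4 translationRowMajor(float tx, float ty, float tz) {
    return {1, 0, 0, tx,
            0, 1, 0, ty,
            0, 0, 1, tz,
            0, 0, 0, 1};
}
```

Upload one layout while the shader expects the other and your translation silently becomes the bottom row of the matrix, which is exactly the bug being described.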
|
# ¿ Aug 4, 2020 17:46 |
|
I'm going to be honest, I always have to double-check myself whenever I'm adding any new matrix code, because I get confused, and different matrix libraries handle the ordering of initializers inconsistently.
|
# ¿ Aug 4, 2020 18:36 |
|
fankey posted:I'm trying to write a simple HLSL 2d shader ( the end result is a WPF Effect if that matters ) that needs to deal with alpha and I am getting totally confused. This simple filter works fine and passes the color straight through Have you tried dividing the color by the alpha, inverting, then multiplying by the alpha? It might be pre-multiplied and get messed up if you don't account for it.
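What I mean, as a sketch in plain C++ standing in for the HLSL (the names are mine):

```cpp
#include <cassert>
#include <cmath>

struct RGBA { float r, g, b, a; };

// Invert the color of a premultiplied-alpha pixel: un-premultiply first,
// invert the straight color, then premultiply again. Inverting the
// premultiplied channels directly gives the wrong result wherever a < 1.
RGBA invertPremultiplied(RGBA p) {
    if (p.a == 0.0f) return p;  // fully transparent: nothing to invert
    float r = p.r / p.a, g = p.g / p.a, b = p.b / p.a;  // back to straight
    return {(1.0f - r) * p.a, (1.0f - g) * p.a, (1.0f - b) * p.a, p.a};
}
```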
|
# ¿ Jun 15, 2021 18:32 |
|
giogadi posted:If I’m planning to do cross-platform 3d graphics, is there any reason not to do it in Vulkan? I.e., is there any advantage to the alternative of manually doing a dx12 backend and a metal backend? My impression is that structurally there isn't that much difference between the latest generation of APIs. That being said, you might still not be able to use Vulkan or might not have as much support for some low-end or specialized platforms like slightly older consoles and maybe certain smartphones. So you might end up having to create an abstraction layer with separate implementations anyway.
|
# ¿ Aug 31, 2022 18:51 |
|
|
|
I'd go further and say every API absolutely has bugs. Right now.
|
# ¿ May 6, 2023 00:37 |