Absurd Alhazred

Raenir Salazar posted:

I managed to finish my final project! Here's a Demo video + Commentary.

Basically I made an OpenGL application that animates a rigged mesh using the Windows Kinect v2.

There are two outstanding issues:

1. Right now every frame is a keyframe when inserting. I can't yet make, say, a 30-second animation with only 3 keyframes that it interpolates between. I'm seeing if I can fix it, but I'm getting some strange bad-allocation memory errors when I try, on super simple lines of code too, like aiVector3D* positionKeys = new aiVector3D[NEW_SIZE];. I don't get it; I'm investigating.

2. It only works on any mesh in theory: the meshes have to share the same skeleton structure and bone names, and their bones have to have an orientation that matches the Kinect. When I try to fix the orientations so they match, it ruins my rigging on the pre-supplied .blend files I found on YouTube from Sebastian Lague. I'd have to reskin the meshes to the fixed orientations, which is a huge headache since I'm not 100% sure how the orientations have to be set up in Blender to make the Kinect happy.


Okay, so Y makes sense to me: it follows the length of the bone from joint to joint. I'm not sure if it's positive Y or negative Y, but I hope it doesn't matter. In Blender, the default orientation in most tutorials is positive Y facing away from the parent bone.

Now "Normal" and "Binormal" don't make sense to me in any practical way. If the Bone is following my mesh's arm, is Z palm up or palm down? This is all I really care about and I don't see anything in my googling that implies what's correct. Using Blender's "Recalculate Bone Roll" with "Global Negative Y Axis" points the left arm Z's axis forward, and sometimes this gives good results?

I want my palm movement to match my palm orientation, but it's hard to get this right: editing my bones deforms my mesh unless I rerig it, and it's hard to know up front whether I've got it right. :(

I don't know much about the Kinect, but I would have thought that "bone roll" (which I would have called "bone pitch") would be the vector the bone rotates around, so perpendicular to the plane of movement of the bone, and then the binormal is what you would get from (bone direction) x normal. Which should get a bit confusing with the thumb, which insists on being opposable in humans, and is also missing a joint.

As for exporting from Blender, what I've found is that it's basically impossible to closed-form resolve it, so just try out a few until you find the one that works, and then save that export configuration for future use.


Absurd Alhazred

Raenir Salazar posted:



Would this be that? Y follows the bone, but then X and Z feel like they could be anything. I'm confused: can't the bone rotate around either the X or Z axis?

Other than the thumb's metacarpal*, which is all over the place, most bone joints have a natural movement plane, which is what I would think the Z would be perpendicular to. I don't know how they deal with the thumb: do they have any special allowances for it?

* Leap Motion calls this a phalange for internal consistency, so they can treat the thumb as a finger with a zero-length metacarpal. I don't know how the Kinect does it. See this diagram for the real-life naming convention, and this page for the convention Leap Motion uses.

Absurd Alhazred

Joda posted:

I want to batch all meshes that use the same shader into the same draw call, and for that I need all the textures to be in the same array texture. (I'm doing it this way because I'm expecting to have a LOT of visually distinct smaller objects, and for that to be efficient I would have to batch them.) I don't want to modify anything CPU-side, but new textures will have to go through the CPU side from the HDD before they're on the GPU. I want to stream the data asynchronously, because I don't want to stall the application while I'm loading in a new texture.

I don't want different draw calls in different threads. All GL calls will be in a single thread. I just want to stream texture data into an array texture without stalling the application. This isn't a problem with something like the vertex data, because array buffers can be mapped directly to a pointer, and double-buffering those would eat at most an extra MB or so.

Have you tried only threading out the loading from HDD to CPU memory, rather than the texture calls?
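
Something like this is what I have in mind; treat it as a rough sketch rather than drop-in code (PendingImage/loaderThread/uploadPending are made-up names, I'm assuming stb_image for decoding, and I'm assuming the array texture was already allocated with glTexStorage3D with every layer the same size):
C++ code:
#include <GL/glew.h>
#include <stb_image.h>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>
#include <vector>

struct PendingImage {
    int layer, width, height;
    std::vector<unsigned char> rgba;
};

std::mutex g_queueMutex;
std::queue<PendingImage> g_pendingImages;

// Runs on a worker thread: only disk I/O and decoding, no GL calls.
void loaderThread(std::vector<std::string> files) {
    for (int layer = 0; layer < (int)files.size(); ++layer) {
        int w = 0, h = 0, comp = 0;
        unsigned char* pixels = stbi_load(files[layer].c_str(), &w, &h, &comp, 4);
        if (!pixels) continue;
        PendingImage img{layer, w, h,
                         std::vector<unsigned char>(pixels, pixels + w * h * 4)};
        stbi_image_free(pixels);
        std::lock_guard<std::mutex> lock(g_queueMutex);
        g_pendingImages.push(std::move(img));
    }
}

// Call once per frame on the GL thread; every GL call stays here.
void uploadPending(GLuint arrayTexture) {
    std::lock_guard<std::mutex> lock(g_queueMutex);
    while (!g_pendingImages.empty()) {
        const PendingImage& img = g_pendingImages.front();
        glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, img.layer,
                        img.width, img.height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
        g_pendingImages.pop();
    }
}

// Usage: std::thread(loaderThread, fileList).detach(); then call uploadPending() each frame.

The point is that only the disk I/O and decode leave the GL thread; if the uploads themselves turn out to be the stall, the next step would be an unpack PBO, but I'd try the simple version first.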

Absurd Alhazred

Doc Block posted:

Is OpenGL on Windows still a toxic hellstew of driver updates breaking previously working OpenGL programs, vendor GLSL compiler implementations being broken in incompatible ways, one vendor's OpenGL implementation running your app just fine while another's crashes because you did things in an order it doesn't like, etc. etc. etc.?

If I stick to OpenGL 3.3 core profile will I be OK on Windows? The game this would be for isn't very demanding.

You should be fine, but be sure to use GLEW and GLFW to simplify getting started, loading extensions, and working with Windows. On the whole it's not as "modern" as, say, DX11, but it's not too bad. I think drivers are way more stable these days, too.
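
For reference, the boilerplate I'm talking about is roughly this (a minimal sketch, with error handling kept to the bare minimum):
C++ code:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // needed on macOS, harmless elsewhere

    GLFWwindow* window = glfwCreateWindow(1280, 720, "GL 3.3 core", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    glewExperimental = GL_TRUE;            // use the core-profile entry points
    if (glewInit() != GLEW_OK) return 1;

    std::printf("GL %s on %s\n", (const char*)glGetString(GL_VERSION),
                (const char*)glGetString(GL_RENDERER));

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}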

Absurd Alhazred

Doc Block posted:

Another question: is it OK to fetch a GLSL uniform variable's location when the shader is linked and then save it (so I don't have to ask OpenGL for it every time) or should I ask every time in case something causes it to change (behind the scenes shader recompilation or whatever)?

Yeah, you shouldn't need to query it again unless you actively recompile/relink it.
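
For example (just a sketch; the ShaderProgram struct and the uMVP/uColor names are made up):
C++ code:
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

struct ShaderProgram {
    GLuint id = 0;
    GLint mvpLoc = -1;
    GLint colorLoc = -1;

    // Call once right after linking; the locations stay valid until you relink.
    void cacheLocations() {
        mvpLoc   = glGetUniformLocation(id, "uMVP");
        colorLoc = glGetUniformLocation(id, "uColor");
    }

    void use(const glm::mat4& mvp, const glm::vec4& color) const {
        glUseProgram(id);
        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
        glUniform4fv(colorLoc, 1, glm::value_ptr(color));
    }
};

glGetUniformLocation returns -1 for names that don't exist or got optimized out, and setting a -1 location is a silent no-op, so caching doesn't change your error behavior either.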

Absurd Alhazred

lord funk posted:

Yep, quaternions are great!

New question: I want to do a kind of hybrid ortho / projection display of 3D objects. I want to be able to project them, but then translate along the screen's 2D plane wherever I want them on the screen. Like this:



So the hex shapes there are orthographic projection, and I want the shapes to 'hover' over them.

How should I do this? I thought I could make a vertex shader that just takes the final position and translates the x/y coordinate, but that's still in 3D projected view space. I want to just shift the final projected image.

If you just translate them they're going to get projected towards the center of the screen rather than where you want them.

I think your best bet is to perspective render them to a texture once, and then write that texture orthographically in multiple places.
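
Roughly what I mean, as a sketch (drawObjectWithPerspective/drawOrthoHexes/drawScreenQuad are placeholders for your own drawing code):
C++ code:
#include <GL/glew.h>

GLuint g_colorTex = 0, g_depthRb = 0, g_fbo = 0;
const int kTargetSize = 512;

void createRenderTarget() {
    glGenTextures(1, &g_colorTex);
    glBindTexture(GL_TEXTURE_2D, g_colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, kTargetSize, kTargetSize, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &g_depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, g_depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, kTargetSize, kTargetSize);

    glGenFramebuffers(1, &g_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, g_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, g_colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, g_depthRb);
    // Optionally assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFrame(int windowWidth, int windowHeight) {
    // 1) Perspective pass into the offscreen texture, alpha cleared to 0.
    glBindFramebuffer(GL_FRAMEBUFFER, g_fbo);
    glViewport(0, 0, kTargetSize, kTargetSize);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawObjectWithPerspective();

    // 2) Back to the default framebuffer: draw the ortho hexes, then blend the
    //    texture as screen-space quads wherever you want the object to appear.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, windowWidth, windowHeight);
    // drawOrthoHexes();
    // for (each placement) drawScreenQuad(g_colorTex, placement);
}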

Absurd Alhazred

Xerophyte posted:

If you want an in-viewport translation with the perspective preserved then you can also just change the viewport transform. Render to texture makes sense if you intend to reuse the result in several places or over several frames.

Yeah, the problem with multiple viewports is that you have to change the viewport per instance. It's not really all that difficult to do: most modern GPUs support selecting the viewport in the geometry shader, and an extension lets you push it down into the vertex shader instead. But for this use case it seems better to render once and use it many times, and render to texture is the best way to do that, I think. It really depends on the use case; you're right if the image needs to change every frame (although you could render all the frames to a sprite sheet and sample a different one each frame, assuming a finite set of sprites covers it for you).
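
For completeness, the host side of the multiple-viewport route is just indexed viewports (GL 4.1+), something like this sketch; the shader then writes gl_ViewportIndex per primitive, either in the geometry shader or, with ARB_shader_viewport_layer_array, in the vertex shader:
C++ code:
#include <GL/glew.h>

// Hypothetical layout: one viewport per instance, laid out in a horizontal strip.
void setupViewports(int count, int width, int height) {
    for (int i = 0; i < count; ++i)
        glViewportIndexedf((GLuint)i, (float)(i * width), 0.0f,
                           (float)width, (float)height);
}
// In the shader: gl_ViewportIndex = <per-instance index passed down from the vertex shader>.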

Absurd Alhazred

lord funk posted:

Yeah that makes total sense! Thanks for the approach details.

I do want to render the objects each frame, so they can react to environment lighting changes.

You could take care of that with a normal map, but yeah, maybe instancing with multiple viewports is the thing to do.

Absurd Alhazred
I finally got some geometry shading going in our codebase. Not too difficult to do, but are there any nice best-practices guides? I know you're not supposed to multiply too many primitives, but it's standard to use it for points -> billboards, right?

Absurd Alhazred

Ralith posted:

This is massive overkill, and I wouldn't be shocked if it actually performed slower than, say, instancing.

So having most of the information in the instance variables and applying it to a single quad is better than having the same number of vertices and using a geometry shader to expand them into quads?

Absurd Alhazred

Hubis posted:

Geometry shader billboards are bad.
4-vertex instances are probably worse (depending on your vertex count).

The best approach is to use 32-vertex instances made up of 8 quads, so your "billboard Id" will be "(InstanceID << 3) + (VertexId >> 2)" and your "billboard vertex id" will be "VertexId & 3" (on Nvidia -- can't speak for Intel/amd)

Any specific reason for that, or is it empirical?

Absurd Alhazred

Hubis posted:

Sorry, was phone-posting!

Geometry Shaders: The reason they're usually bad is because DirectX did not provide any relaxation to the "Rasterization Order" requirement -- the primitives must be rasterized downstream in the exact order in which they are generated (at least in circumstances where they would overlap). This can become a problem if you do expansion (or culling) in the GS because now each GS invocation has to serialize to make sure the outputs are written to a buffer for later rasterization in the right order. It might not be an issue if you're not actually geometry-limited, but it's generally something to be concerned about. Slow isn't useless though, and NVIDIA has come up with some cool ways to use the GS without invoking any major performance penalty (like Multi-Projection) but it has a bad rap in general.

Quad-Per-Instance: GPUs are wide processors. NVIDIA shader units essentially process 32 threads in parallel each instruction (and they have many such shader units). One quirk is that, at least on some iterations of the hardware, a given 32-thread "warp" can only process one instance at a time when executing vertex shaders. This means that if you have a 4-vertex instance then 4 threads are going to be enabled and 28 threads are going to be predicated off (essentially idle). Your vertex processing will be running at 12.5% efficiency! If you're doing particle rendering it might be that you're going to be pixel shader/blending rate bound before the vertex shading becomes an issue, but often you have enough vertices that it will bite you.

So if you use instancing (but have multiple quads/instance so you are using all 32 threads) then you avoid all these potholes.

Graphics is fun!

This is very informative. Thanks! :)
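
Just to make that indexing concrete, here it is written out as plain C++ (the names are mine); in the real thing you'd compute this in the vertex shader from gl_InstanceID and gl_VertexID, and you'd still need an index buffer (6 indices per quad) or similar to turn each group of 4 vertices into two triangles:
C++ code:
#include <cstdint>

struct BillboardVertex {
    uint32_t billboardId; // which particle/billboard this vertex belongs to
    uint32_t cornerId;    // 0..3, which corner of that billboard's quad
};

// 32 vertices per instance = 8 quads of 4 vertices, so a full warp stays busy.
BillboardVertex decode(uint32_t instanceId, uint32_t vertexId /* 0..31 */) {
    BillboardVertex v;
    v.billboardId = (instanceId << 3) + (vertexId >> 2); // 8 quads per instance
    v.cornerId    = vertexId & 3;                        // 4 corners per quad
    return v;
}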

Absurd Alhazred

Suspicious Dish posted:

hi thread why am i still dealing with row major / column major issues in 2018 can someone explain ok bye

It's extremely irritating, but you feel like a champ when you manage to reduce a bunch of operations involving several vectors into a handful of initializations and matrix multiplications.

One way to get a handle on it is to intentionally create glm::mat2x3s and operate with them on glm::mat2s and glm::mat3s, or to initialize the same matrix through a list of floats and then through vectors, and then just look at the elements in a debugger or have them printed out for you. The good news is that GLM is set up to work more or less the same way as matrix operations in GLSL, so the knowledge will transfer.
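
Something like this little experiment, for instance (just a sketch to poke at in a debugger):
C++ code:
#include <cstdio>
#include <glm/glm.hpp>

int main() {
    // The constructor takes elements in column-major order: the first three
    // floats are column 0, and so on.
    glm::mat3 fromFloats(1.0f, 2.0f, 3.0f,   // column 0
                         4.0f, 5.0f, 6.0f,   // column 1
                         7.0f, 8.0f, 9.0f);  // column 2

    glm::mat3 fromVecs(glm::vec3(1.0f, 2.0f, 3.0f),
                       glm::vec3(4.0f, 5.0f, 6.0f),
                       glm::vec3(7.0f, 8.0f, 9.0f)); // same matrix, built from columns

    // Indexing is m[column][row], just like GLSL.
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            std::printf("m[%d][%d] = %g\n", col, row, fromFloats[col][row]);

    // m * v treats v as a column vector, also like GLSL.
    glm::vec3 v = fromVecs * glm::vec3(1.0f, 0.0f, 0.0f); // picks out column 0: (1, 2, 3)
    std::printf("%g %g %g\n", v.x, v.y, v.z);
    return 0;
}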

Absurd Alhazred

Suspicious Dish posted:

i assure you my frustration is not one simply of mental confusion. that post is only true in a pre-glsl world where the matrices are opaque to the programmer. in glsl, matrices *are* column-major (to say nothing of upload order!) and you can access e.g. the second column with mtx[1]. you can choose whether you treat your vector as row-major or column-major by using either v*mtx or mtx*v, respectively.

with the advent of ubo's, it's common for people to use row-major layouts for model/view matrices which are more memory-efficient (3x float4 rather than 4x float4), and use vec3(dot(v, m_row0), dot(v, m_row1), dot(v, m_row2)). you can even specify that this is your memory layout with the layout(row_major) flag in glsl. that does not change the *behavior* of matrices to be row-major, it just adds an implicit transpose upon access basically.

the issue comes when you try to mix this convention and mat4x4 in one shader. we have a compute shader that (among other things) calculates a projection matrix. it is cross-compiled from hlsl, and the hlsl to glsl compiler we have assumes a column-major convention (meaning it applies matrices as mtx*v). it also converts hlsl's row accessors (m[1][2]) into glsl's column accessors (m[2][1]), and flips _m12 field accessor order into _m21.

today i ripped all this out in favor of using row-major matrices everywhere (like d3d does) and having our cross-compiler flip multiplication order.

Wow. Never mind my sophomoric platitudes, then!

Absurd Alhazred

KoRMaK posted:

I picked up an OpenGL shader tool and am trying to write a method that captures the screen. The library had this method in it, which looks OK from what I've googled. Problem is that whenever I try to read the pixels, all I get back is black pixels and not what is currently on screen.

All the stuff I've read seems to indicate that glReadPixels alone should do the job, which I tried but it still returned blankness. I'm bad at this, what am I doing wrong?



C++ code:
  bool GrabFrame( void * pPixelBuffer )
  {
    writeIndex = (writeIndex + 1) % 2;
    readIndex = (readIndex + 1) % 2;

    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIndex]);
    glReadPixels(0, 0, nWidth, nHeight, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIndex]);
    
    unsigned char * downsampleData = (unsigned char *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (downsampleData)
    {
      unsigned char * src = downsampleData;
      unsigned char * dst = (unsigned char*)pPixelBuffer + nWidth * (nHeight - 1) * sizeof(unsigned int);
      for (int i=0; i<nHeight; i++)
      {
        memcpy( dst, src, sizeof(unsigned int) * nWidth );
        src += sizeof(unsigned int) * nWidth;
        dst -= sizeof(unsigned int) * nWidth;
      }
      glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, NULL);
    
    return true;
  }

First, just to keep things simpler, I would start with a single PBO (even though it will be slower, since OpenGL has to finish the write before you can read back each frame) and copy the image from the start of the buffer to the end (even though that will give you an upside-down image).

As written, the first frame you capture will be garbage, because you're reading from a buffer you haven't written to yet; moving to one buffer will fix that. Do you keep getting garbage from the second capture on? Also, are you sure you have the right framebuffer bound?

Another general idea: check glGetError() after each OpenGL command. It might give you more insight into what's going wrong.
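
Here's the kind of stripped-down sanity check I'd start from, with a crude error-check macro (GL_CHECK and grabFrameSimple are made-up names); if even this synchronous, PBO-free version comes back black, the problem is which framebuffer is bound or when you call it, not the PBO juggling:
C++ code:
#include <GL/glew.h>
#include <cstdio>
#include <vector>

#define GL_CHECK(call)                                                  \
    do {                                                                \
        call;                                                           \
        GLenum err = glGetError();                                      \
        if (err != GL_NO_ERROR)                                         \
            std::fprintf(stderr, "%s failed: 0x%04X\n", #call, err);    \
    } while (0)

std::vector<unsigned char> grabFrameSimple(int width, int height) {
    std::vector<unsigned char> pixels((size_t)width * height * 4);
    GL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, 0));  // read from the default framebuffer
    GL_CHECK(glReadBuffer(GL_BACK));                 // or GL_FRONT, depending on when you call it
    GL_CHECK(glPixelStorei(GL_PACK_ALIGNMENT, 1));
    GL_CHECK(glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)); // make sure no PBO intercepts the read
    GL_CHECK(glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data()));
    return pixels; // bottom row first; flip it afterwards if you need top-down
}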

Absurd Alhazred

mobby_6kl posted:

Is anyone here developing 3D stuff for Win Mixed Reality? I have a few simple native OpenGL apps that I was hoping to port over as a quick test and learning exercise and it seems like it should be possible... somehow, but there's gently caress all documentation on this and most samples seem to just use Unity.

WMR's native API is a mess. Unless they've changed things significantly, you need a managed Windows app just to access it directly. Your best bet is to use SteamVR instead, which doesn't have the same issues; there's a Windows Mixed Reality for SteamVR bridge, and it's all free as far as I know (once you have the WMR headset).

Absurd Alhazred

Ralith posted:

The SteamVR API is a disaster zone itself, actually, though maybe for different reasons. Did anybody not screw up their VR APIs?

At this point, I've put off future VR development until OpenXR comes out. I have a lot of respect for the Unity/Epic engineers who were faced with the task of wrapping SteamVR up into something approximately reliable.

I mean, at least SteamVR lets you just use it in C++ with OpenGL, DirectX, or Vulkan. What don't you like about it? I do have more experience with Oculus, I will admit.

Absurd Alhazred

Ralith posted:

Man, I have a whole list somewhere, but I'm not sure where I left it. The short story is that it's practically undocumented and rife with undefined behavior and myopically short-sighted design decisions. Yeah, it's nice that it isn't opinionated about the other tools you use, but that's just baseline sanity--a standard which it otherwise largely fails. Their own headers get their ABI totally wrong in places.

Might they have gotten better since you last used them? I just added a new feature to our code last week using a more recent addition to the API, and it was mostly painless.

Absurd Alhazred

mobby_6kl posted:

OpenGL. I'm rendering a ton of pretty simple instances (tens to hundreds of thousands maybe, at least before culling) where each can have a different texture, though in practice there might be a few hundred. So far I'm testing with smaller numbers but everything works as it should with Texture Arrays, I just pass each instance the appropriate offset. Is this generally the right approach? I looked also into bindless textures but that seems rather more complicated and I'm not sure would be better.

I'd look into what limits the hardware you're targeting puts on the number of texture array layers (GL_MAX_ARRAY_TEXTURE_LAYERS), but otherwise it doesn't seem too bad. I would recommend a texture atlas, though, if you can fit your images onto one.
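
Concretely, what I'd check and roughly how that path looks (a sketch; makeArrayTexture is a made-up name, and glTexStorage3D needs GL 4.2 or ARB_texture_storage, otherwise allocate with glTexImage3D):
C++ code:
#include <GL/glew.h>
#include <cstdio>

GLuint makeArrayTexture(int width, int height, int layers) {
    GLint maxLayers = 0;
    glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxLayers); // at least 256 in GL 3.x, often 2048
    std::printf("max array texture layers: %d\n", maxLayers);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layers);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Upload each image into its slot with glTexSubImage3D(..., layer, w, h, 1, ...).
    return tex;
}
// Fragment shader side: a sampler2DArray sampled with texture(tex, vec3(uv, layerIndex)),
// where layerIndex comes in per instance (vertex attribute, SSBO, etc.).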

quote:

In the end I didn't even touch WMR, SteamVR is what I ended up using and it works perfectly fine with WMR as well. I guess there might be some weirdness in the API but I haven't had much trouble implementing VR in a new app. Granted I copy/pasted a lot of the sample code because the documentation seemed to be somewhat lacking in places, but still.

The only issue I'm actually having is with SteamVR Input, it all makes sense but just... doesn't work. The old API does though so that's good enough for now.

Yeah, I don't really see a future for the dedicated WMR API, thankfully. SteamVR can be a bit obtuse, but code samples can usually pull you through.

Absurd Alhazred

Suspicious Dish posted:

Texture arrays are superior to texture atlases in almost every way, if you can make them work. You get all of the benefits of an atlas without the filtering boundaries and other junk.

Yeah, that's fair. You'd have to profile to see if your target GPUs incur any performance cost for using one over the other.

Absurd Alhazred

Doc Block posted:

Yeah I think I just became confused because I read somewhere that the minimum number of descriptor sets that the standard requires implementations to support is only 4 or something, and I think even that is just how many can be actually bound at once.

Also I was thinking people were saying you could have an array of textures without having to have a descriptor for each one.

Anyway, got bindless textures to work and it was a lot less painful than I thought it’d be. Having to specify the size of the array in the shader is kind of a bummer though (my hardware doesn’t support the extension that lets you just do layout(whatever) uniform texture2D lotsaTextures[];, apparently).

It's been a while since I've played around with Vulkan, but wasn't there a way to pass the equivalent of preprocessor definitions on link/compile/whatever they call it when you add a SPIR-V object file as a shader to a pipeline? Some kind of compile-time constant or something?
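
I was thinking of specialization constants. From memory (so double-check the details), the shader declares something like layout(constant_id = 0) const int TEXTURE_COUNT = 1; and you override it at pipeline creation time, roughly like this; whether it's allowed to size a descriptor array specifically depends on your toolchain and hardware:
C++ code:
#include <vulkan/vulkan.h>
#include <cstdint>

VkPipelineShaderStageCreateInfo makeFragmentStage(VkShaderModule module, int32_t textureCount) {
    // Statics only so the pointed-to data stays alive until vkCreateGraphicsPipelines;
    // give these a proper lifetime in real code.
    static int32_t specData;
    static VkSpecializationMapEntry entry;
    static VkSpecializationInfo specInfo;

    specData = textureCount;
    entry.constantID = 0;                 // matches constant_id = 0 in the shader
    entry.offset     = 0;
    entry.size       = sizeof(int32_t);
    specInfo.mapEntryCount = 1;
    specInfo.pMapEntries   = &entry;
    specInfo.dataSize      = sizeof(int32_t);
    specInfo.pData         = &specData;

    VkPipelineShaderStageCreateInfo stage{};
    stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage.stage  = VK_SHADER_STAGE_FRAGMENT_BIT;
    stage.module = module;
    stage.pName  = "main";
    stage.pSpecializationInfo = &specInfo;
    return stage;
}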

Absurd Alhazred
I hope this is the right thread for this: while I was reading about color transformations in GPU Gems I googled some dead links and found this evidence of some heated arguments about color coding and gamma correction from the late '90s. The internet always made you stupid.

Absurd Alhazred

Brownie posted:

The first problem is that when I perform the first draw, I notice that if a fragment's stencil value should be updated by both the front AND back stencil tests, I only see a value of 1 (corresponding to only one increment). That's okay for my purposes (since once either test passes, the pixel is unmarked), but I just wanted to confirm that I shouldn't expect both tests to mutate the stencil buffer.

Are you sure you've disabled backface culling, too?

Absurd Alhazred

Brownie posted:

Yeah, using RenderDoc I can see that the Cull Mode is set to NONE. If it were BACK or FRONT I'd also see artifacts in the final rendered image, but I don't, and manual inspection of the stencil shows that all the pixels I expect to be marked are marked; it's just that some of them are only marked once instead of twice like I expect.

(As an aside: thank the lord for tools like RenderDoc)

I've not had a lot of experience with stencil tests, but isn't there also something where you have to explicitly set the stencil state separately for back faces? Is that set?
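
In OpenGL terms, the state I have in mind is the *Separate calls, something like this sketch (with culling off, as you said):
C++ code:
#include <GL/glew.h>

void setupTwoSidedStencilIncrement() {
    glDisable(GL_CULL_FACE);
    glEnable(GL_STENCIL_TEST);
    glStencilMask(0xFF);

    // Always pass the stencil test, and increment on depth-pass for both facings.
    glStencilFuncSeparate(GL_FRONT_AND_BACK, GL_ALWAYS, 0, 0xFF);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_INCR);
}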

Absurd Alhazred

czg posted:

In an effort to learn stuff, I've been working on making a little vulkan renderer in my spare time.

I thought I had a pretty neat little setup, with two threads separately handling the actual rendering and updating of data, kinda like this:

I'm guessing this isn't a completely uncommon setup?

This actually works perfectly fine, and I get the expected results drawn, but the validator doesn't like it.
As soon as I update a descriptorSet for an image in the update thread, it says that the commandbuffer currently used in the render thread is invalidated:
code:
(null)(ERROR / SPEC): msgNum: 9 - You are adding vkQueueSubmit() to command buffer 0x140616d65b0 that is invalid because bound DescriptorSet 0x14 was destroyed or updated.
The spec valid usage text states 'Each of fence, semaphore, and swapchain that are valid handles must have been created, allocated, or retrieved from the same VkInstance'
([url]https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VUID-VkAcquireNextImageInfoKHR-commonparent[/url])
    Objects: 1
       [0] 0x140616d65b0, type: 6, name: (null)
I looked around and it looked like if I used the extension VK_EXT_descriptor_indexing and created my descriptorSetLayouts with the flag VK_DESCRIPTOR_SET_LAYOUT_CREATE_UPDATE_AFTER_BIND_POOL_BIT_EXT, I should be better off, but no such luck.
Is there something else I need to flag to be able to update descriptors (in my case a texture array) while they're being used in a commandBuffer?
Or will I just have to pause my render thread before I update descriptors and resume it once the new command buffers are ready?

And as I wrote, this renders perfectly fine without a hitch or any glitchiness, it's just the validator complaining. Am I right to assume that I should always heed the validator, and if it works despite validation errors I'm just lucking out?

I'm writing this in c# using SharpVk in case that matters.

It's been a while since I've played with Vulkan, but don't you have to make sure a command buffer has been fully built and submitted (and perhaps even executed?) before changing any state it references? So you might need to double- or triple-buffer the descriptor sets. That being said, is that what you usually update in Vulkan? I thought you'd be updating the underlying data while keeping the descriptor sets more static.
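
The double-buffering I mean looks roughly like this in Vulkan C terms (you're on SharpVk, so treat it as pseudocode for the structure; FrameData and beginFrame are made-up names): one copy of the descriptor set per frame in flight, and you only update the copy whose frame's fence has already signaled.
C++ code:
#include <vulkan/vulkan.h>
#include <cstdint>

constexpr int kFramesInFlight = 2;

struct FrameData {
    VkFence         inFlightFence;
    VkCommandBuffer commandBuffer;
    VkDescriptorSet descriptorSet;   // this frame's copy of the texture-array set
};

void beginFrame(VkDevice device, FrameData& frame,
                const VkWriteDescriptorSet* pendingWrites, uint32_t writeCount) {
    // Wait until the GPU has finished the last submission that used this slot...
    vkWaitForFences(device, 1, &frame.inFlightFence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &frame.inFlightFence);

    // ...only then is it safe to update this slot's set (each write's dstSet should
    // point at frame.descriptorSet) and to re-record the command buffer that binds it.
    if (writeCount > 0)
        vkUpdateDescriptorSets(device, writeCount, pendingWrites, 0, nullptr);

    // Re-record frame.commandBuffer binding frame.descriptorSet, then submit it with
    // vkQueueSubmit(..., frame.inFlightFence).
}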

Absurd Alhazred

czg posted:

Thanks for the hints.
Luckily implementing double buffered descriptors turned out to be pretty simple, and now everything works perfectly and the validator is happy!

:buddy:

Absurd Alhazred

Odddzy posted:

I'm a 3d artist in a games company and would like to learn a bit more about programming HLSL or GLSL stuff to get more of the technical aspect of the job. Could I have some recommendations of books that could break me in to the subject?

I think your best bet would be to just get a book about one of the graphics APIs and learn from that, which gives you the pipeline context for where shaders fit and everything. I've found the OpenGL SuperBible pretty readable, more so than the OpenGL Programming Guide. I imagine DirectX has similarly usable books.

Absurd Alhazred

OneEightHundred posted:

I'm working on porting a very old game to Windows. The game runs at a fixed 60Hz and changing the frame rate is not an option. Is there a way to determine the monitor refresh rate in windowed mode so I can skip or duplicate frames as necessary when the refresh rate isn't 60Hz? (I might use SyncInterval 0 instead, but I'm thinking that predictable skip/duplication rates are probably more consistent, and regardless of that, I still need a way of differentiating between 60Hz and anything else.)

Just use a timer and have the graphics loop sleep the rest of the frame, or if you want to push frames out all the time, have your physics loop do that instead.

Absurd Alhazred

OneEightHundred posted:

That's basically what would happen in the SyncInterval 0 option. The only reason I'm even worrying about this is because of the potential pathological case at 60Hz where the present time is close to a frame boundary, in which case timing jitter could cause it to randomly overwrite the pending frame before it's been displayed, causing frame drops and duplication. I dunno if that's really much of an issue, but using SyncInterval 1 at 60Hz would avoid it since it would never overwrite a pending frame.

The problem is I'm not sure how to tell if the refresh rate is actually 60Hz when not in exclusive fullscreen mode.

If you're porting an old game to a modern computer, I think you're way likelier to find yourself near the start of a frame with nothing to do than near the end of a frame. What I would do to avoid jitter is make sure I'm measuring towards the next frame boundary, not 1/60 of a second from the start of the current frame: keep a frame counter you advance each frame, then multiply it by 1.0/60.0 to get the current target end-of-frame time. That way you won't be compounding errors on consecutive frames.

Another option, which avoids pushing out a bunch of frames in quick succession if some other part of the program (or another running program) makes you skip frames, is to measure the time since the start of the run, divide by the frame time and take the floor to get the last frame that should have finished, then calculate your sleep time to the end of the current frame.
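
A sketch of that second option with std::chrono (runAt60Hz and simulateAndRenderFrame are made-up names):
C++ code:
#include <chrono>
#include <thread>

void runAt60Hz() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> step(1.0 / 60.0);
    const auto start = clock::now();

    for (;;) {
        // Which fixed 1/60 s frame should be finishing right now?
        const double elapsed = std::chrono::duration<double>(clock::now() - start).count();
        const long long frameIndex = (long long)(elapsed / step.count());

        // simulateAndRenderFrame(frameIndex); // skip or duplicate as needed

        // Sleep to an absolute deadline computed from the start of the run,
        // not "start of this frame + 1/60", so errors never compound.
        const auto deadline = start + std::chrono::duration_cast<clock::duration>(step * (frameIndex + 1));
        std::this_thread::sleep_until(deadline);
    }
}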

Absurd Alhazred

Rhusitaurion posted:

I have a question about geometry shaders.

I'm using them to generate 3D geometry from 4D geometry. For example:

https://imgur.com/zD9J15J

The way this works is I have a tetrahedral mesh that I send into the geometry shader as lines_adjacency (since it gives you 4 points at a time - very convenient). There (and this is the sketchy part), I have a bunch of branchy code that determines whether each tetrahedron intersects the view 3-plane, and emits somewhere between 0 and 6 (for the case where the whole tetrahedron is in-plane) vertices in a triangle strip.

It's a neat trick, but it seems sketchy. I'm no GPU wizard, but my understanding is that geometry shaders are slow, and branchy shaders are slow. Additionally they don't seem to be supported in WebGL, or Metal.

Is there any reasonable alternative for generating geometry that's dependent on transformed vertices? I could do this on the CPU, but I'd have to end up doing essentially all the vertex transforms there, which seems lovely. I could save a lot of work with some kind of BVH, but still. Compute shaders seem promising, but I think I'd have to send the transformed vertices back to the CPU to get the 4-to-many vertices thing.

What if you used a compute shader to conditionally send vertices over to one of two vertex buffers, and only draw one of them?
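
A rough sketch of one way that could look, with the compute pass appending its output into an SSBO and bumping a vertex count in an indirect-draw struct, so the draw only consumes what was actually written (shader source omitted, and all the names here are made up):
C++ code:
#include <GL/glew.h>

struct DrawArraysIndirectCommand {
    GLuint count;          // bumped by the compute shader via atomicAdd
    GLuint instanceCount;  // 1
    GLuint first;          // 0
    GLuint baseInstance;   // 0
};

void sliceAndDraw(GLuint computeProgram, GLuint drawProgram, GLuint vao,
                  GLuint vertexSsbo, GLuint indirectBuffer, GLuint numTetrahedra) {
    // Reset the vertex count each frame.
    const DrawArraysIndirectCommand reset = {0, 1, 0, 0};
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferSubData(GL_DRAW_INDIRECT_BUFFER, 0, sizeof(reset), &reset);

    // Compute pass: reads the tetrahedra, writes the 0-6 slice vertices of each one
    // into vertexSsbo, and atomicAdd()s onto count (indirectBuffer bound as an SSBO too).
    glUseProgram(computeProgram);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vertexSsbo);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, indirectBuffer);
    glDispatchCompute((numTetrahedra + 63) / 64, 1, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_COMMAND_BARRIER_BIT |
                    GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

    // Draw exactly what the compute pass produced, without a CPU round trip.
    glUseProgram(drawProgram);
    glBindVertexArray(vao); // its attributes point at vertexSsbo used as a vertex buffer
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glDrawArraysIndirect(GL_TRIANGLES, nullptr);
}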

Absurd Alhazred

Dominoes posted:

Ooh, good point. So CAD is something more tied into the real world, while Blender etc. is more for assets for games/films that never leave the computer? I am looking for the former. I want to build a modular aeroponics thing for plants. Could Blender still work for this?

I also kind of want to learn gfx-rs. I made a 4D rendering system with vulkano before, but that seems dead, and the docs were bad.

Yeah, Blender doesn't really have the real-world connection. Honestly, I hate working with it, though I think that's because 3D modeling isn't my thing; I have to dabble in it for my job.

Absurd Alhazred

Xeom posted:

I've started to experiment a little with OpenGL for some 2D rendering. I've been coding my own math functions and such because, hey, it's a fun hobby.
I got a little scene with some quads going, but I've run into a problem I can't seem to figure out.

I decided to use different Z depths to control which quad goes in front of the other, but they begin to shrink as they go away from the camera, even though I'm using an orthographic matrix, which as far as I understand means that distance should not affect size. Yet they do shrink, and their location relative to the X and Y axes also changes. It's almost as if everything is being scaled towards the origin. Everything seems to work fine until I play with the Z axis.

I do all my scaling and rotation in a 2x2 matrix and then "promote" that matrix into a 4x4 matrix, meaning I just copy the 2x2 values into the 4x4 identity matrix. Then I multiply that matrix by a 4x4 translation matrix. All my functions seem correct, and I'm following the second edition of 3D Math Primer for Graphics and Game Development for the math: left-handed convention and row-major order.

Here is some of the pertinent code.
https://pastebin.com/2CuKTUwW

Your "orthographic" projection looks like perspective to me.
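
A quick way to check with glm (column-major, so not your row-major convention, but the structural point carries over): in a real orthographic matrix nothing ever feeds z into w, so depth can't scale x and y after the perspective divide.
C++ code:
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 ortho = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f);
    glm::mat4 persp = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    // ortho: w row is (0, 0, 0, 1), so w stays 1 and nothing shrinks with depth.
    // persp: the entry at column 2, row 3 is -1, so w = -z_view and everything divides by depth.
    std::printf("ortho w row: %g %g %g %g\n", ortho[0][3], ortho[1][3], ortho[2][3], ortho[3][3]);
    std::printf("persp w row: %g %g %g %g\n", persp[0][3], persp[1][3], persp[2][3], persp[3][3]);
    return 0;
}

If your "ortho" matrix has a nonzero entry in that slot (or anything else that makes w depend on z), that's the scaling-toward-the-origin effect you're seeing.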

Absurd Alhazred

Suspicious Dish posted:

row-major still works you just have to set the layout(row_major) flag in GLSL and hope that your GPU vendor didn't gently caress it up (they probably did)

But the translation is column-major. It's best to stick to just one convention instead of mixing them.

Absurd Alhazred

Suspicious Dish posted:

you don't have to mix them? I don't know what this means.

In the code we were presented with, the translation and "ortho" projection matrices used opposite conventions (one row-major, one column-major) for the intended use. You should stick to a single convention instead of mixing them.

Absurd Alhazred
I'm going to be honest: I always have to double-check myself whenever I'm adding new matrix code, because I get confused, and different matrix libraries handle the ordering of initializers inconsistently.

Absurd Alhazred

fankey posted:

I'm trying to write a simple HLSL 2D shader (the end result is a WPF Effect, if that matters) that needs to deal with alpha, and I am getting totally confused. This simple filter works fine and passes the color straight through
code:
sampler2D input: register(s0);

float4 main(float2 uv:TEXCOORD) : COLOR
{
  float4 color = tex2D(input, uv);
  return color;
}
The problem is that if I modify either the alpha or the color, things get wonky, so I'm obviously not understanding some basic concept. I can set RGBA to fixed values, but attempting to do any math on them gets weird. So this works to clear out red or to adjust the alpha
code:
sampler2D input: register(s0);

float4 main(float2 uv:TEXCOORD) : COLOR
{
  float4 color = tex2D(input, uv);
  color.rgb.r = 0;
  // also this works 
  color.a = color.a / 2;
  return color;
}
But if I try and invert a pixel things get weird. This code tries to just invert the blue channel but ends up also modifying the alpha.
code:
sampler2D input: register(s0);

float4 main(float2 uv:TEXCOORD) : COLOR
{
  float4 color = tex2D(input, uv);
  color.b = 1.0f - color.b;
  return color;
}
both
I tried saving the alpha and reapplying but it didn't make any difference.

Have you tried dividing the color by the alpha, inverting, then multiplying by the alpha? It might be pre-multiplied and get messed up if you don't account for it.
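
Written out as plain C++ (the HLSL is the same arithmetic), and assuming the input really is premultiplied, which WPF surfaces generally are:
C++ code:
struct Color { float r, g, b, a; };

Color invertBluePremultiplied(Color c) {
    if (c.a > 0.0f) {
        float straightB = c.b / c.a;  // un-premultiply
        straightB = 1.0f - straightB; // invert in straight-alpha space
        c.b = straightB * c.a;        // re-premultiply before returning
    } else {
        c.b = 0.0f;                   // fully transparent: every channel stays 0
    }
    return c;
}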

Absurd Alhazred

giogadi posted:

If I’m planning to do cross-platform 3d graphics, is there any reason not to do it in Vulkan? I.e., is there any advantage to the alternative of manually doing a dx12 backend and a metal backend?

(Talking mostly theoretically here just to better understand. I’m probably gonna just stick with OpenGL for my actual work for the foreseeable future)

My impression is that structurally there isn't that much difference between the latest generation of APIs. That being said, you might not be able to use Vulkan at all, or might not have as much support for it, on some low-end or specialized platforms like slightly older consoles and maybe certain smartphones. So you might end up having to create an abstraction layer with separate implementations anyway.


Absurd Alhazred
I'd go further and say every API absolutely has bugs. Right now.
