Dominoes
Sep 20, 2007

Hey dudes. Looking for 3D modelling software. (High-level question; I don't think that's the point of this thread, but this seems like the best place.) I downloaded Blender and am about to dive into the official tutorials. Is this the way to go? It seems full-featured, well-designed, and battle-tested, and it's free, unlike the alternatives. Should I go with this, pirate something (not paying ~$2k/year or w/e), or buy something reasonably priced? I'd be using it as a design aid for projects to build.

Although maybe this is an excuse to dork around in gfx-rs...

Dominoes fucked around with this message at 22:10 on Feb 1, 2020

haveblue
Aug 15, 2005



Toilet Rascal
What do you want to do? If you just want to be able to throw together programmer art for small projects, Blender is fine. If you want to make something releasable or develop marketable skills, you will have to graduate to industry-standard tools at some point.


e: I should have read further, Blender is probably perfect for "design aid".

haveblue fucked around with this message at 22:21 on Feb 1, 2020

lord funk
Feb 16, 2004

I say learn Blender. If (a long way) down the road you need to migrate to something else, you will be able to take your knowledge from Blender and translate it to the new app / UI. That way you will be starting from a place of experience and knowledge :eng101:

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I never really got into Blender, but "design aid" (for what kind of project?) sounds like you want something more like a CAD program. Fusion 360 still has a no-cost licensing option, last I checked. Or if you want to go open-source turbonerd like me, you can use OpenSCAD (or FreeCAD, even).

Dominoes
Sep 20, 2007

Ooh good point. So CAD is something more tied into real world, while Blender etc is more assets for games/films etc that never leave the computer? I am looking for the former. I want to build a modular aeroponics thing for plants. Could Blender still work for this?

I also kind of want to learn gfx-rs. I made a 4d rendering system with vulkano before, but that seems dead, and the docs were bad.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Dominoes posted:

Ooh good point. So CAD is something more tied into real world, while Blender etc is more assets for games/films etc that never leave the computer? I am looking for the former. I want to build a modular aeroponics thing for plants. Could Blender still work for this?

I also kind of want to learn gfx-rs. I made a 4d rendering system with vulkano before, but that seems dead, and the docs were bad.

Yeah, Blender doesn't really have the real-world connection. Honestly I hate working with it, but I think that's because 3D modeling isn't my thing; I have to dabble in it for my job.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Dominoes posted:

Ooh good point. So CAD is something more tied into real world, while Blender etc is more assets for games/films etc that never leave the computer? I am looking for the former. I want to build a modular aeroponics thing for plants. Could Blender still work for this?

Caveat: I used to work for Autodesk and I wrote a decent chunk of the CPU/cloud rendering backend used by Fusion 360, so maybe I'm biased.

Blender and similar programs are like sculpting with clay, Fusion 360 and similar programs are like generating models from technical drawings. If you just want to do some visual prototyping without needing to actually build anything then Blender can work fine if you're familiar with it and have more of an arts background, but it's probably not the best of tools here. For actually designing a modular aeroponics thing a CAD program is going to be infinitely better, but it might be a little complex if you just want a sort of rough 3D sketch. I'd still suggest starting with Fusion 360 or Sketchup for this unless you're already comfortable with Blender.

This isn't really a great thread for either topic, though. For CAD stuff I don't think SA has a dedicated place but there's some discussion in DIY, in this case probably the 3D printing thread. They tend to spend a lot of time discussing servos and hot ends, but there are a bunch of CAD folks who can help you get started and answer questions better than the bunch of graphics coders who read this thread. For using Blender and similar there's the 3DCG thread in Creative Convention, which isn't really a help-with-CG thread either but can at least point you in good directions for that sort of thing.

Dominoes
Sep 20, 2007

Awesome; diving into Fusion 360 right now!

Does anyone have any tutorial recommendations? The official one is not really doing a good job from a user's perspective; it's rapid-firing through the functionality without context.

Dominoes fucked around with this message at 17:02 on Feb 2, 2020

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Fusion 360 is a serious industry tool; you won't find as many hobbyist tutorials out there as you would for Photoshop or whatever. Brush up on your CAD basics, perhaps watch tutorials for some AutoCAD thing, or consider taking a fuller course.

Dominoes
Sep 20, 2007

Thank you. I'm going to press on with the official tutorial and/or play around. It has all the content needed to learn; it just feels like a reference in video form rather than a learning aid.

I.e., it goes over preferences, shortcuts, and multiple ways to access tools before showing you how to make a box, so you have no context for what you'd use them for and it all turns into a brain dump.

Dominoes fucked around with this message at 00:43 on Feb 4, 2020

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

As a general sort of workflow concept in CAD, you should try to get in the habit of defining as much as you can within 2D sketches for the various profiles of your object (each sketch can be defined on a different plane in 360), and then extruding, lofting, etc. from those 2D sketches into 3D, doing any 3D operations last.
You can also just go at it with 3D primitives straight from the start, but things get messy that way for non-trivial designs.

I'd also recommend learning how parametric modeling works in 360; it can be a big time saver for tweaking designs without completely re-doing them. Basically you can define parameters/variables (this side length, that angle, etc.) at the beginning, build your sketches using those variable names, then tweak the parameters later with some quick edits of those values, and the whole design will re-adjust to match. You'll need a good grasp of constraints too, to get the most out of it.

And check out "Lars Christensen" on YouTube; he has a ton of F360 content. A lot of it might be more advanced than you're ready for, but I think he has a beginner tutorial playlist too.

Dominoes
Sep 20, 2007

Thank you very much! Diving into learning about parametric modelling. I appreciate those guidelines/building blocks!

Dominoes
Sep 20, 2007

Love the parametric approach. It feels like how a programmer would 3d model. This tutorial is very nice. It walks you through how to build something, and in doing so lets you use various tools and features in a way where you see what you'd use them for.

Dominoes fucked around with this message at 00:39 on Feb 5, 2020

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Dominoes posted:

Love the parametric approach. It feels like how a programmer would 3d model. This tutorial is very nice. It walks you through how to build something, and in doing so lets you use various tools and features in a way where you see what you'd use them for.
I just briefly skimmed it and it looks decent. The only thing I would add, which I feel is often overlooked, is that fillet #1 could basically have been done in the sketch too!

Dominoes
Sep 20, 2007

This is almost striking me as a "life skill I somehow missed" category. The official tutorials are so bad, though... It feels like they were designed in a meeting and the approach was never tested on new users, i.e. the target audience.

MrMoo
Sep 14, 2000

So I'm actually working with the NBA now and have Maya files of a basketball and an animation sequence that I need to pump into THREE.js, and the least awful approach appears to be GLTF/.glb. However, Maya of course still does not export directly to that format, and I'm not getting great feedback on exporting to FBX: "converting to that won't give us a high enough quality image."

I do have an FBX of a single 🏀 and Facebook’s FBX2GLTF tool manages to lose all the textures so I end up with a black sphere, not so useful.

Can anyone explain the image-quality comment? And what's the best way forward? I see many other tools as options to try for conversion, e.g. via Babylon.js or Verge3D.

I have a target environment setup awaiting the .glb file, with a placeholder animation instead.

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.
Who's giving that feedback? Because it seems fairly nonsensical. FBX is just a 3D interchange format; it doesn't have some kind of inherent quality (or lack thereof).

MrMoo
Sep 14, 2000

The artist who originally made the graphics; I'm not sure whether they were new to Maya just for this project, though.

So it's probably safer to try to find someone else who has access to Maya to run an export for me.

Here is a screenshot of the graphics:



And it replaces the diorama here:


MrMoo fucked around with this message at 02:35 on Feb 16, 2020

MrMoo
Sep 14, 2000

So, I have Maya 2020 up and a file open,



After fixing the textures it appears to render roughly right, not horribly,



But I cannot export the animation; I just end up with a boring empty scene,

MrMoo
Sep 14, 2000

Apparently I need to convert nParticles to geometry and then stuff can get exported. It looks like Maya 2020 is buggy as hell, though. I used a random third-party tool and it forgot to process UVs, so the result looks a bit surreal.

MrMoo fucked around with this message at 00:52 on Feb 19, 2020

MrMoo
Sep 14, 2000

Maya on a MacBook's small screen randomly crashes the design view; plug in a large monitor and it works fine. This is awesome software.



Looks like a major bug broke the Babylon.js integration and the fix won't ship till spring.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I've found like 5 pages talking about how great flip model borderless windowed is and can't find anything answering the actual thing I want to know about it:

How do you actually put a game in borderless window fullscreen mode? It looks like SetFullscreenState is for exclusive fullscreen, but the documentation isn't consistent in the terminology it uses for fullscreen modes.


Also, is there any info on how DXGI handles Alt+Enter under the hood? I've wound up in some weird situation where hitting Alt+Enter when the game is in windowed mode causes it to maximize the window to take up the whole monitor and then slowly expand it horizontally until it takes up all of the second monitor, and then only renders to the first monitor. Something is obviously really screwy, and I'd rather just intercept Alt+Enter and handle it myself unless there's some good reason not to, but I'd like to know what it's trying to do so I can maybe just get it to cooperate instead.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
You're correct that "borderless window fullscreen" doesn't involve the exclusive fullscreen APIs.

Instead, all you need to do is make a regular window, without any window chrome, positioned and sized to exactly cover the monitor.
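
Rough Win32 sketch of that (drycoded; EnterBorderlessFullscreen and hwnd are just placeholder names for whatever your existing game window is):
C++ code:
#include <windows.h>

// Strip the window chrome and stretch the window over the monitor it currently
// occupies. Call this instead of any exclusive-fullscreen API.
void EnterBorderlessFullscreen(HWND hwnd)
{
    // Remove the caption, borders and resize frame, but keep the window itself.
    LONG_PTR style = GetWindowLongPtr(hwnd, GWL_STYLE);
    style &= ~(WS_CAPTION | WS_THICKFRAME | WS_MINIMIZEBOX | WS_MAXIMIZEBOX | WS_SYSMENU);
    SetWindowLongPtr(hwnd, GWL_STYLE, style);

    // Size and position it to exactly cover the monitor it is on.
    HMONITOR monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(monitor, &mi);
    SetWindowPos(hwnd, HWND_TOP,
                 mi.rcMonitor.left, mi.rcMonitor.top,
                 mi.rcMonitor.right - mi.rcMonitor.left,
                 mi.rcMonitor.bottom - mi.rcMonitor.top,
                 SWP_FRAMECHANGED | SWP_NOOWNERZORDER);
}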

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
You also need to be using one of the new flip DXGI modes
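
For example, when creating the swap chain (untested sketch, D3D11 + DXGI 1.2 assumed; the factory, device and window handle are whatever you already have):
C++ code:
#include <d3d11.h>
#include <dxgi1_2.h>

// Create a flip-model swap chain for an existing window. The important bit is
// SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD (or FLIP_SEQUENTIAL).
IDXGISwapChain1* CreateFlipSwapChain(IDXGIFactory2* factory, ID3D11Device* device, HWND hwnd)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width            = 0;                              // 0 = use the window's client size
    desc.Height           = 0;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;                              // flip model doesn't allow MSAA swap chains
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 2;                              // flip model needs at least 2 buffers
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    desc.Scaling          = DXGI_SCALING_NONE;

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);
    return swapChain;
}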

Brownie
Jul 21, 2007
The Croatian Sensation
I'm trying to work with some skinned FBX models exported from Maya and I noticed that vertex normal data matches the bind pose, while the position data does not. What is the reasoning behind this? Do I just have to apply the bind shape matrix to the positions (and not the normals) before using the model the way I expect? Or am I missing something completely?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
For smooth skinning, I thought everything was stored in bind pose. For single-bone models, it might store things in bone-local space, but from what I can tell, our code just seems to treat everything as if it's bind pose.

Are you sure you're using FbxSkin/FbxCluster and interpreting the mapping mode correctly? FBX SDK can be confusing, and has poor documentation.

Brownie
Jul 21, 2007
The Croatian Sensation
It'd make a lot more sense if everything was pre-transformed into the bind pose, or nothing was, but it's not what I'm seeing. Unfortunately I'm working with the model in WebGL, so I'm relying on Three.js's loader and do not really have access to the SDK. But loading the model into Blender shows the same thing: the mesh's vertex positions and normals do not match when ignoring the skeleton + pose, and when using the skeleton and default pose, the mesh has visible incorrect normals (because it is effectively applying the same transformation twice).

Not sure if both Blender and Three.js are doing this wrong or if the export isn't being performed properly? I have never used Maya or handled FBXs before so I'm a bit lost as to how I can validate that the model data is incorrect vs the implementation of these loaders.

xgalaxy
Jan 27, 2004
i write code
This is why I can’t wait for FBX to die off.
If you can... change to GLTF format.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Brownie posted:

It'd make a lot more sense if everything was pre-transformed into the bind pose, or nothing was, but it's not what I'm seeing. Unfortunately I'm working with the model in WebGL, so I'm relying on Three.js's loader and do not really have access to the SDK. But loading the model into Blender shows the same thing: the mesh's vertex positions and normals do not match when ignoring the skeleton + pose, and when using the skeleton and default pose, the mesh has visible incorrect normals (because it is effectively applying the same transformation twice).

Not sure if both Blender and Three.js are doing this wrong or if the export isn't being performed properly? I have never used Maya or handled FBXs before so I'm a bit lost as to how I can validate that the model data is incorrect vs the implementation of these loaders.

Don't load FBXs on the client. It's not meant to be a redistributable format. Write your own exporter using the SDK. Community reimplementations can be wrong.

xgalaxy posted:

This is why I can't wait for FBX to die off.
If you can... change to GLTF format.

glTF is very poor as an intermediate format. But definitely load it on the client if you can.

Suspicious Dish fucked around with this message at 16:16 on Apr 30, 2020

Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

Don't load FBXs on the client. It's not meant to be a redistributable format. Write your own exporter using the SDK. Community reimplementations can be wrong.

glTF is very poor as an intermediate format. But definitely load it on the client if you can.

Really wish I had any say in whether we allowed FBXs but unfortunately it's part of our "offering" that users can upload and use FBXs to populate their scenes.

I might just use the FBX SDK to dump the mesh data to an OBJ file and use that to validate my suspicion that the mesh is being exported incorrectly.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Let me ask you a different question: if you have a skinned mesh (e.g. more than one bone influence per vertex), what space could it be in other than bind-pose space? You could store it arbitrarily in the space of one of the bones, but bind-pose space would be easier and more efficient.

I'm not sure what space it could be in, *other* than bind-pose. Obviously, for normals, if the bone is just translation, then there's no difference between bind-pose and non-bind-pose transforms.
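
For reference, standard linear blend skinning with everything stored in bind-pose / model space looks something like this (my own sketch using GLM rather than anything FBX-specific; the arrays and the four-influence limit are just placeholders):
C++ code:
#include <glm/glm.hpp>

// boneWorld[i] is bone i's animated world transform, inverseBind[i] its inverse
// bind-pose transform; their product maps bind-pose space to the current pose.
glm::vec3 skinPosition(const glm::vec3& bindPosePosition,
                       const glm::mat4* boneWorld,
                       const glm::mat4* inverseBind,
                       const int        boneIndex[4],
                       const float      weight[4])
{
    glm::vec4 skinned(0.0f);
    for (int i = 0; i < 4; ++i) {
        glm::mat4 skin = boneWorld[boneIndex[i]] * inverseBind[boneIndex[i]];
        skinned += weight[i] * (skin * glm::vec4(bindPosePosition, 1.0f));
    }
    return glm::vec3(skinned);
}
// Normals go through the same matrices (upper 3x3 part, or its inverse-transpose if
// there's non-uniform scale), which is why storing them in anything other than
// bind-pose space would be an odd choice.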

Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

Let me ask you a different question: if you have a skinned mesh (e.g. more than one bone influence per vertex), what space could it be in other than bind-pose space? You could store it arbitrarily in the space of one of the bones, but bind-pose space would be easier and more efficient.

I'm not sure what space it could be in, *other* than bind-pose. Obviously, for normals, if the bone is just translation, then there's no difference between bind-pose and non-bind-pose transforms.

Yeah you're right, so it looks like my assumption was wrong. I got another export from the client and it looks like the normals match the model's position in the first frame of animation! So the positions are correctly in bind-pose space, but the normals aren't (the reverse of what I believed earlier). The earlier model had an animation that started in a T-pose that was slightly different from the bind pose, which is why I was so confused.

Rhusitaurion
Sep 16, 2003

One never knows, do one?

Hubis posted:

attempt at a Grand Unified Geometry Pipeline to fix both Geometry Shaders and Tessellation, but are still not broadly supported across platforms. For what you need to do I would concur with the suggestion of using a compute shader that reads a vertex array as input and produces an index buffer as output. For optimal performance you might have to be creative by creating a local index buffer in shared memory and then appending it to your output IB as a single block (to preserve vertex reuse). Basically, have a shared memory array that is the size of your maximum possible triangles per dispatch, then compute your actual triangles into there and use atomic increment on a global counter to fetch and then increment a write offset into the output array by that amount. You are effectively reimplementing the GS behavior, but completely relaxing the order dependency.

Many months later, I actually ended up doing something like this, using Vulkan, but I'm wondering if there's a better way than what I've done.

For each object, I allocate one large buffer that will contain the input vertices, space for computed vertices/indices, an indirect draw struct, and another SSBO with a vertex counter. Then, on each frame for each object
1. In one command buffer (not per-object), reset the vertex and index counters to 0 with fill commands
2. In another command buffer, dispatch the compute shader. It operates on the input vertex buffer and atomically increments the vertex and index count SSBOs to get indices of the output buffers to which to write vertices and indices.
3. In another command buffer, do an indirect draw call.

Then I submit the 3 command buffers with semaphores to make sure that they execute in the order above. The first submission also depends on a semaphore that the draw submission from the last frame triggers.

This seems to work fine (except when I hard lock my GPU with compute shader buffer indexing bugs), but I'm wondering if I'm doing anything obviously stupid. I could double-buffer the computed buffers, perhaps, but I'm not sure if it's worth the hassle. I thought about using events instead of semaphores, but 1. not sure if it's wise to use an event per rendered object and 2. can't use events across queues, and compute queue is not necessarily the same as graphics queue.

Thoughts?

Rhusitaurion fucked around with this message at 00:16 on May 8, 2020

cultureulterior
Jan 27, 2004
I'm trying to create the directions to the triangle edges out of ddx and ddy (in order to use them for fragment normals). I have the barycentric coordinates of the triangle, and intend to transform from them to a set of vectors and then from the screen matrix to the tangent matrix.

Does this make sense?

Am I forgetting some other obvious way of doing this?

The purpose of this is to make a shader that gives triangles bevelled edges via their normals.

Xerophyte
Mar 17, 2008

This space intentionally left blank
That should work but I think you can do it directly without detouring through screenspace.

Barycentric coordinate values are inherently a measure of the distance from an interior point to the edge associated with that coordinate. Assuming you already have some shading normal that you'd like to bevel towards, then you should be able to get a bevel effect by interpolating between the triangle's face normal and that shading normal by just using the min of the barycentric coordinate values, which is then a measure of how close you are to a triangle edge.

I.e. something like
C++ code:
// Interpolate between face_normal and shading_normal, such that face_normal is used in
// most of the triangle interior and the shading_normal is used close to the triangle edges.
float3 bevelNormal(float3 barycentric_pos,
                   float3 face_normal,
                   float3 shading_normal) {
  const float BEVEL_DISTANCE = 0.1;

  // Closeness to the nearest edge: the smallest barycentric component.
  float  min_bary           = min(barycentric_pos.x, min(barycentric_pos.y, barycentric_pos.z));
  float  edge_normal_weight = 1.0 - saturate(min_bary / BEVEL_DISTANCE);
  // slerp = spherical interpolation between unit vectors; if your shader language
  // doesn't have one, normalize(lerp(...)) is a reasonable stand-in.
  float3 normal             = slerp(face_normal, shading_normal, edge_normal_weight);

  return normal;
}
Completely drycoded and I still write weird CPU path tracers rather than do raster graphics so I don't really know what functions and data your typical fragment shader can access.

This might be enough if your triangles are friendly, equilateral-ish and evenly sized, but I expect using the barycentrics directly like this will probably look off since the interpolation doesn't take real-world scale or triangle shape into account. You can convert from barycentric to real world edge distances to solve that, but you need to know some more properties of the triangle like side lengths and triangle area to do the conversion.

I don't really remember all the conversion math for taking "standard" areal barycentrics (where the coordinates sum to 1) and changing them to trilinear coordinates, homogeneous barycentrics (where the coordinates are the actual subtriangle areas), or edge distances. I did some reading and I think you'd end up with something like
C++ code:
// Convert from "standard" sum-to-1 areal barycentric coordinates to distances to the
// associated triangle edges. Side lengths and triangle area should be in world units,
// with triangle_side_lengths[i] being the length of the edge associated with coordinate i.
float3 arealBarycentricToEdgeDistance(float3 barycentric_pos,
                                      float3 triangle_side_lengths,
                                      float  triangle_area) {
  // The subtriangle between the point and edge i has area barycentric_pos[i] * triangle_area,
  // and also 0.5 * triangle_side_lengths[i] * edge_distances[i], so solve for the distance.
  // (sum() is shorthand for b.x + b.y + b.z; it's 1.0 for normalized areal coordinates.)
  float  normalization_factor = 2.0 * triangle_area / sum(barycentric_pos);
  float3 edge_distances = normalization_factor * barycentric_pos / triangle_side_lengths;

  return edge_distances;
}
to do the conversion. Again, completely drycoded and untested.

Not sure if this is helpful. There might be a simpler way to get from barycentrics to edge distances, I'm not super familiar with the conversion math and I don't really know what data you're working with or what's available in your typical modern fragment shader.


[Late edit:] I did the derivation on paper and the above should be correct but, perhaps obviously, you don't have to include the "/ sum(barycentric_pos)" factor in the normalization when the input is in areal coordinates since by definition the sum of those coordinates is 1.0 everywhere.

Xerophyte fucked around with this message at 19:00 on May 9, 2020

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Rhusitaurion posted:

Many months later, I actually ended up doing something like this, using Vulkan, but I'm wondering if there's a better way than what I've done.

For each object, I allocate one large buffer that will contain the input vertices, space for computed vertices/indices, an indirect draw struct, and another SSBO with a vertex counter. Then, on each frame for each object
1. In one command buffer (not per-object), reset the vertex and index counters to 0 with fill commands
2. In another command buffer, dispatch the compute shader. It operates on the input vertex buffer and atomically increments the vertex and index count SSBOs to get indices of the output buffers to which to write vertices and indices.
3. In another command buffer, do an indirect draw call.

Then I submit the 3 command buffers with semaphores to make sure that they execute in the order above. The first submission also depends on a semaphore that the draw submission from the last frame triggers.

This seems to work fine (except when I hard lock my GPU with compute shader buffer indexing bugs), but I'm wondering if I'm doing anything obviously stupid. I could double-buffer the computed buffers, perhaps, but I'm not sure if it's worth the hassle. I thought about using events instead of semaphores, but 1. not sure if it's wise to use an event per rendered object and 2. can't use events across queues, and compute queue is not necessarily the same as graphics queue.

Thoughts?

No comment on the abstract algorithm, but there are a few technical errors here.

First, you don't need three command buffers. If you're only using a single queue, which is probably the case, you only need one command buffer for the entire frame. Semaphores are only used for synchronizing presentation and operations that span multiple queues. Note that you don't need to use a dedicated compute queue just because it's there; the graphics queue is guaranteed to support compute operations, and for work that your frame is blocked on it's the right place. Events definitely aren't appropriate. What you need here is a memory barrier between your writes and the following reads, and between your reads and the following writes. Without suitable barriers your code is unsound, even if it appears to work in a particular case.
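
Concretely, the barrier between the compute pass and the draw would be something like this (sketch only; assumes a single queue, and that the compute shader wrote the vertices, indices and indirect draw arguments the draw consumes):
C++ code:
#include <vulkan/vulkan.h>

// Make the compute shader's buffer writes available to the indirect draw and
// the vertex/index fetch that follow in the same command buffer.
void barrierComputeToDraw(VkCommandBuffer cmd)
{
    VkMemoryBarrier barrier{};
    barrier.sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT |
                            VK_ACCESS_INDEX_READ_BIT |
                            VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   // producer
                         VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT |
                         VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,     // consumers
                         0,
                         1, &barrier,
                         0, nullptr,
                         0, nullptr);
}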

Second, maybe I misunderstood but it sounds like you're zeroing out memory, then immediately overwriting it? That's not necessary.

Third, a single global atomic will probably serialize your compute operations, severely compromising performance. Solutions to this can get pretty complex; maybe look into a parallel prefix sum scheme to allocate vertex space.

A separate set of buffers per frame is a good idea, because it will allow one frame's vertex state to be pipelined with the next frame's compute stage.

Ralith fucked around with this message at 04:10 on May 8, 2020

Rhusitaurion
Sep 16, 2003

One never knows, do one?

Ralith posted:

First, you don't need three command buffers. If you're only using a single queue, which is probably the case, you only need one command buffer for the entire frame. Semaphores are only used for synchronizing presentation and operations that span multiple queues. Note that you don't need to use a dedicated compute queue just because it's there; the graphics queue is guaranteed to support compute operations, and for work that your frame is blocked on it's the right place.
Not sure why I didn't realize this earlier. It does make things easier.

quote:

Events definitely aren't appropriate. What you need here is a memory barrier between your writes and the following reads, and between your reads and the following writes. Without suitable barriers your code is unsound, even if it appears to work in a particular case.
Now that it seems like I should be using a single queue and command buffer, memory barriers definitely make sense. I was thinking about events because they would allow the work to be interleaved at the most granular level between the different stages, but I see now that barriers should allow the same thing.

quote:

Second, maybe I misunderstood but it sounds like you're zeroing out memory, then immediately overwriting it? That's not necessary.
Yeah I realize now that I didn't explain this well. The compute stage treats an indirect draw struct's indexCount as an atomic, to "allocate" space in a buffer to write index data in. That index data changes per-frame, so I have to re-zero the counter before each compute dispatch. There's also another atomic that works the same way for the vertex data that the indices index. Is there some other way to reset or avoid resetting these?
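
Concretely, once everything is in one command buffer the reset step would just be a fill plus a barrier before the dispatch, something like this (sketch; the buffer name and offsets are made up):
C++ code:
#include <vulkan/vulkan.h>

// Zero the vertex/index counter words, then make that transfer write visible to
// the compute shader's atomics.
void resetCounters(VkCommandBuffer cmd, VkBuffer buffer,
                   VkDeviceSize countersOffset, VkDeviceSize countersSize)
{
    vkCmdFillBuffer(cmd, buffer, countersOffset, countersSize, 0);

    VkMemoryBarrier barrier{};
    barrier.sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT;

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         0, 1, &barrier, 0, nullptr, 0, nullptr);
}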

quote:

Third, a single global atomic will probably serialize your compute operations, severely compromising performance. Solutions to this can get pretty complex; maybe look into a parallel prefix sum scheme to allocate vertex space.
Well, it's 2 atomics per object, but yeah, it's probably not great. Thanks for the pointer. I'll look into it, but it sounds complicated so the current solution may remain in place for a while...

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Rhusitaurion posted:

Yeah I realize now that I didn't explain this well. The compute stage treats an indirect draw struct's indexCount as an atomic, to "allocate" space in a buffer to write index data in. That index data changes per-frame, so I have to re-zero the counter before each compute dispatch. There's also another atomic that works the same way for the vertex data that the indices index. Is there some other way to reset or avoid resetting these?
Oh, for some reason I read buffers where you wrote counters. Yeah, that makes sense.

Rhusitaurion posted:

Well, it's 2 atomics per object, but yeah, it's probably not great. Thanks for the pointer. I'll look into it, but it sounds complicated so the current solution may remain in place for a while...
Yeah, it's a whole big complicated thing; I don't blame you for punting on it. It'd be nice if there were reusable code for this somewhere, but reusable abstractions in GLSL are hard.

Rhusitaurion
Sep 16, 2003

One never knows, do one?
Dumb question about memory barriers - this page says that no GPU gives a poo poo about VkBufferMemoryBarrier vs. VkMemoryBarrier. This seems to imply that if I use a VkBufferMemoryBarrier per object to synchronize reset->compute->draw, it will be implemented as a global barrier, so I might as well just do all resets, then all computes, then all draws with global barriers in between. But as far as I can tell, this is essentially what my semaphore solution is currently accomplishing, since semaphores work like a full memory barrier.

Is that post full of poo poo, or can I use VkBufferMemoryBarriers as they seem to be intended, i.e. to provide fine-grain synchronization?

Rhusitaurion fucked around with this message at 19:46 on May 8, 2020

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

Rhusitaurion posted:

Dumb question about memory barriers - this page says that no GPU gives a poo poo about VkBufferMemoryBarrier vs. VkMemoryBarrier. This seems to imply that if I use a VkBufferMemoryBarrier per object to synchronize reset->compute->draw, it will be implemented as a global barrier, so I might as well just do all resets, then all computes, then all draws with global barriers in between.
I don't have any special knowledge about implementation behavior (if you really want to know, you can go spelunking in AMD's or Intel's open source drivers), but I've heard similar things. You should be structuring things phase-by-phase like that regardless to reduce pipeline switching, of course.

Rhusitaurion posted:

But as far as I can tell, this is essentially what my semaphore solution is currently accomplishing, since semaphores work like a full memory barrier.
Semaphores introduce an execution dependency, not a memory barrier. You cannot use semaphores as a substitute for memory barriers under any circumstances. For operations that span queues you need both; for operations on a single queue, semaphores aren't useful.
