Brownie
Jul 21, 2007
The Croatian Sensation
Hey guys, I'm interested in switching careers from Web Development (which I've been doing for ~2.5 years professionally) to graphics/engine development. Hopefully this is a good place to ask for some insight into how best to do that.

I'm trying to figure out what sort of project I should take on that would be both an effective way to force myself to learn and results in something I can show off as a portfolio piece. The types of things that get me really excited are things like: subsurface scattering, GPU particle systems, deferred rendering of glass etc. I'm currently just trying to learn a bit of Vulkan and OpenGL (I also have to decide which one to commit myself more fully to :smithicide:).

The options to me are basically:
1) Build a small game using Unreal/Unity/Godot
2) Try building my own "engine" and render some small, barely interactive area
3) Contribute to an open source project and try to grab tasks related to rendering

The problem with #1 is that my impression of Unreal/Unity is that their rendering pipelines are largely out of your control and already come with stuff like PBR, post-processing etc. so I wouldn't actually be able to learn/demonstrate the things I'm most interested in. But maybe that's just incorrect, or is less important than having familiarity with the game engine.

On the other hand, the problem with #2 is that game engines are huge and even creating a minimal example is a ton of work, especially once you want to do anything other than load all your buffers onto the GPU and move the camera around. I'm still happy to do the work but I'm just afraid of spending a lot of time re-inventing the simplest wheel instead of actually producing more current and interesting effects. The other problem is that I'm not sure what people would be most interested in seeing -- a scene graph implementation? A solid post-processing pipeline? A working deferred renderer? With enough time I'd love to do all of these things but I feel like I need to manage my goals and time, especially starting out.

With #3 it feels like it'd be premature to become a domain expert in some OSS engine instead of having more experience with all the different parts of a GL/Vulkan pipeline.

More background:
I've already written a few WebGL projects and stuff using a high-level wrapper library that provides a scene graph implementation and abstracts away almost all the actual WebGL API calls. I have a physics background and I've taken a graphics course at the local university so I'm pretty comfortable with the math involved (although if you asked me to write the projection transformation matrix I'd still have to look it up).


Brownie
Jul 21, 2007
The Croatian Sensation
Thanks a lot for the help, everyone. I think I'll look into Unreal, try to make a nice-looking demo, and aim to eventually help out with a PR.

Really appreciate the insight.

Brownie
Jul 21, 2007
The Croatian Sensation

dupersaurus posted:

Nice, if only I was still in the industry...

As someone wanting to do game dev but currently in web dev... this is cool as heck

Brownie
Jul 21, 2007
The Croatian Sensation
Anyone here have any experience with the Vulkan graphics API? I'm going through the tutorials at vulkan-tutorial.com to get more familiar with the API, but their code just throws everything into one big file, which is starting to give me a huge headache.

I'm trying to reason about how to organize my code, but the way that resources are managed in Vulkan means I can't just naively map Vulkan resources to my own classes and rely on RAII. Most objects are de-allocated using the logical device, which means that I'd need to store the device handle for each 'child' object. Maybe this is still the way to go?

The alternative is to just rely on the 'context' instance to actually properly call cleanup on all the resources it owns -- but still encapsulating all the creation and cleanup logic in custom classes.

Does anyone have any good resources/advice on managing resources like this? I guess this isn't a Vulkan specific problem -- you'd run into the same problem with OpenGL I think.
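To make it concrete, here's roughly what I mean by "store the device handle per object" -- a toy sketch where the `Owned` name, the `std::function` deleter, and the `int` stand-in handle are all placeholders (in real code the deleter would capture the `vk::Device` and call the right `destroy*` function):

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Toy sketch of "each wrapper stores what it needs to free itself".
// `Owned`, the std::function deleter, and the int handle are all stand-ins;
// in real code the deleter would capture the vk::Device and call destroy*.
template <typename Handle>
class Owned {
public:
    Owned() = default;
    Owned(Handle handle, std::function<void(Handle)> deleter)
        : handle_(handle), deleter_(std::move(deleter)) {}
    ~Owned() { reset(); }

    // movable but not copyable, like vk::Unique* handles
    Owned(Owned&& other) noexcept
        : handle_(other.handle_), deleter_(std::move(other.deleter_)) {
        other.deleter_ = nullptr;
    }
    Owned& operator=(Owned&& other) noexcept {
        if (this != &other) {
            reset();
            handle_ = other.handle_;
            deleter_ = std::move(other.deleter_);
            other.deleter_ = nullptr;
        }
        return *this;
    }
    Owned(const Owned&) = delete;
    Owned& operator=(const Owned&) = delete;

    Handle get() const { return handle_; }

private:
    void reset() {
        if (deleter_) deleter_(handle_);
        deleter_ = nullptr;
    }
    Handle handle_{};
    std::function<void(Handle)> deleter_;
};
```

The upside is every resource cleans itself up in reverse declaration order; the downside is each object carries a copy of the device handle (or a deleter), which is the part I'm not sure about.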

Brownie
Jul 21, 2007
The Croatian Sensation

Absurd Alhazred posted:

I've had a bit of experience with Vulkan. The tutorials are really obnoxious in exactly the way that you express, but what I've found useful is to slowly build up with the tutorials, but to use the classes in vulkan.hpp instead of vulkan.h. Here's the page explaining it, it is part of the standard Vulkan SDK, and has some forms of RAII, although be aware that some resources you're getting out of a pool rather than managing them directly.

I don't know that OpenGL has a similar framework - in essence, we've had to create our own RAII wrappers.

Hmmm I've been using vulkan.hpp, but I haven't been using their Unique objects that provide automatic destruction.

Maybe I'll just use those and write a bunch of different factory functions.
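The mental model I have is that the vk::Unique* handles behave a lot like std::unique_ptr with a deleter that remembers the device. Here's a toy analogue of the factory-function shape I'm picturing (FakeBuffer and the destroyed-counter are made up, just for illustration):

```cpp
#include <cassert>
#include <memory>

// Toy analogue of vk::Unique* handles: std::unique_ptr with a deleter that
// carries the extra state needed to free the resource. FakeBuffer and the
// destroyedCount counter are made up; in real code the deleter would hold
// the vk::Device.
struct FakeBuffer { int id; };

struct FakeBufferDeleter {
    int* destroyedCount = nullptr;  // stand-in for the stored device handle
    void operator()(FakeBuffer* buffer) const {
        ++*destroyedCount;
        delete buffer;
    }
};

using UniqueFakeBuffer = std::unique_ptr<FakeBuffer, FakeBufferDeleter>;

// factory function: all creation details live here, the caller just gets RAII
UniqueFakeBuffer createFakeBuffer(int id, int* destroyedCount) {
    return UniqueFakeBuffer(new FakeBuffer{id}, FakeBufferDeleter{destroyedCount});
}
```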

Brownie
Jul 21, 2007
The Croatian Sensation
Hmm, my inexperience with C++ programming patterns has quickly caught up with me, and I'm looking for some style advice.

I posted earlier about refactoring the vulkan-tutorial code, and I've sort of quickly run into a (now obvious) limitation with my approach. I started the refactor by moving my pipeline code into functions in its own namespace, where each function returns part of what the `GraphicsPipelineCreateInfo` struct needs. Then each of these are used in a general `createGraphicsPipeline` function. I just did this so I could more easily reason about the steps required to create one of these objects -- since there are so many steps, having them all in one giant function was pretty unpleasant and unreadable (coming from python/ruby/JS).

However, a problem is introduced by the way that Vulkan expects you to link structs using pointers. Here's an example of code that results in an access violation (if not straight-up undefined behaviour):

code:
namespace vkr {
	namespace GraphicsPipeline {
		// Other code...

		vk::PipelineViewportStateCreateInfo getViewportStateCreateInfo(const vk::Extent2D& swapchainExtent)
		{
			vk::Viewport viewport = {};
			viewport.x = 0.0f;
			viewport.y = 0.0f;
			viewport.width = static_cast<float>(swapchainExtent.width);
			viewport.height = static_cast<float>(swapchainExtent.height);
			viewport.minDepth = 0.0f;
			viewport.maxDepth = 1.0f;

			vk::Rect2D scissor = {};
			scissor.offset = { 0, 0 };
			scissor.extent = swapchainExtent;

			vk::PipelineViewportStateCreateInfo viewportState;
			viewportState.viewportCount = 1;
			viewportState.pViewports = &viewport; // BUG: points at a local that dies when this function returns
			viewportState.scissorCount = 1;
			viewportState.pScissors = &scissor;   // BUG: same problem
			return viewportState;
		}
		// More code...
	}
}
In case it isn't obvious (as it wasn't for me): by the time the `vk::PipelineViewportStateCreateInfo viewportState` struct gets used elsewhere, the `viewport` and `scissor` structs have already gone out of scope, and those pointers are left dangling.

So I can either go back to throwing everything into one giant function so that the structs don't go out of scope until after the pipeline is created, or I can wrap this in a class and make sure I add each struct as a member. Anyone have any alternatives/advice?
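To illustrate the class route, here's roughly what I mean -- a small builder that owns the sub-structs as members, so the pointers inside the create-info stay valid for the builder's lifetime (these are simplified stand-in structs, not the real vk:: types):

```cpp
#include <cassert>

// Simplified stand-ins for the vk:: structs, just to show the ownership shape.
struct Viewport { float x = 0, y = 0, width = 0, height = 0; };
struct Rect2D   { int offsetX = 0, offsetY = 0; float width = 0, height = 0; };
struct ViewportStateCreateInfo {
    int viewportCount = 0;
    const Viewport* pViewports = nullptr;
    int scissorCount = 0;
    const Rect2D* pScissors = nullptr;
};

class ViewportStateBuilder {
public:
    ViewportStateBuilder(float width, float height)
        : viewport_{0.0f, 0.0f, width, height}, scissor_{0, 0, width, height} {
        info_.viewportCount = 1;
        info_.pViewports = &viewport_;  // points at a member, not a local
        info_.scissorCount = 1;
        info_.pScissors = &scissor_;
    }
    // copying/moving would leave info_ pointing at the old object
    ViewportStateBuilder(const ViewportStateBuilder&) = delete;
    ViewportStateBuilder& operator=(const ViewportStateBuilder&) = delete;

    const ViewportStateCreateInfo& info() const { return info_; }

private:
    Viewport viewport_;                // lives as long as the builder does
    Rect2D scissor_;
    ViewportStateCreateInfo info_;
};
```

The idea would be to keep the builder alive until after the pipeline-creation call, then let it all go out of scope together.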

Brownie
Jul 21, 2007
The Croatian Sensation

Absurd Alhazred posted:

I think you want to do this backwards: you want to have a function that takes parameters, parses them into a vk::PipelineViewportStateCreateInfo and the related data structures it points to, calls the creation function, and then returns a unique handle.

Well that's what I'm doing, I think: https://github.com/BruOp/LearnVulkan/blob/69799bd8b5493fa2d1d2131fe7e5972e22807483/LearnVulkan/Pipeline.cpp#L100-L109

I just am not sure what the best way to organize all the intermediate steps and ensure that all the related structs stick around until the device.createGraphicsPipelineUnique call. But maybe I'm still misunderstanding.

Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

stop trying to clean up code before you understand what it does, and train yourself to read functions with more than 10 lines in them because you will see that a lot in c++

I'm not going to understand the code by just copying it once from a tutorial. Messing around with it is the easiest way to learn, for me. I've already gained a better understanding of what a Vulkan pipeline is responsible for because trying to split things up makes me have to worry about every line.


I think this looks like a pretty neat approach, I'll give it a go. Thanks!

EDIT: In case anyone is interested, I ended up just going with a GraphicsPipelineFactory class which stores all the create structs. The client uses it to create a UniquePipeline but the factory itself is only concerned with owning the vulkan create info structs, and provides sensible defaults for now. I'm pretty happy with how it turned out, thanks everyone!

Brownie fucked around with this message at 14:28 on Jul 11, 2018

Brownie
Jul 21, 2007
The Croatian Sensation

Obsurveyor posted:

Regarding recent Vulkan chat, PacktPub's free book of the day today is "Vulkan Cookbook". No idea how good it is but I thought there might be some interest.

Thanks for this!

Brownie
Jul 21, 2007
The Croatian Sensation
I'm writing a little renderer using bgfx, and I'm wondering what's an appropriate way to abstract/model different "Materials" (the data associated with uniforms in my shader program per mesh/entity). I've tried to avoid polymorphism + RTTI since I'm trying (perhaps stupidly) to do things the "Data Oriented Design" way, and to avoid inheritance + pointers since those make it hard/impossible to ensure data locality. I get that trying to apply DOD to my toy renderer is probably stupid and unnecessary, but I think this is defensible...

I don't know how to model the uniform data that's going to be associated with each entity if each shader program expects different uniforms. Currently I just have different structs for each Material, and a generic "Mesh" object that accepts the material class as a template arg, but this means that the Entity (which is associated with the Mesh) is also going to have to care about this. One approach I thought of is just having different arrays for the different types of materials and referencing them by index, but that's also not really going to enable data locality... unless the loop that issues draw calls is moving through arrays sorted by material type.

Right now I'm tempted to just say "gently caress it" and just ignore preserving the data locality for these "material" objects, but before I did that I wanted to see if I was a) making things way too complicated for myself or b) missing something totally obvious.
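To make the "different arrays per material type" option concrete, this is roughly what I had in mind -- one contiguous vector per material kind, with entities holding a (type, index) pair (all names made up):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One contiguous array per material kind; an entity stores a (type, index)
// pair instead of a pointer. Iterating one vector at a time during the draw
// loop keeps each pass cache-friendly. All names here are illustrative.
struct UnlitMaterial { float color[4]; };
struct PhongMaterial { float diffuse[4]; float shininess; };

enum class MaterialType : std::uint8_t { Unlit, Phong };

struct MaterialRef {
    MaterialType type;
    std::uint32_t index;
};

struct MaterialStore {
    std::vector<UnlitMaterial> unlit;
    std::vector<PhongMaterial> phong;

    MaterialRef addUnlit(UnlitMaterial m) {
        unlit.push_back(m);
        return {MaterialType::Unlit, std::uint32_t(unlit.size() - 1)};
    }
    MaterialRef addPhong(PhongMaterial m) {
        phong.push_back(m);
        return {MaterialType::Phong, std::uint32_t(phong.size() - 1)};
    }
};
```

The draw loop would then walk `store.unlit` in one pass and `store.phong` in another, which only pays off if entities are also submitted grouped by material type.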

Brownie
Jul 21, 2007
The Croatian Sensation

Suspicious Dish posted:

Welcome to designing and building a material pipeline. :)

It depends on what you want your artists and VFX people to author in. If you want them to author in shading language, you'll need some way to parse out the shading language and get its input parameters (or match them up in the engine). If you can do this in a shader graph, you can generate the inputs and outputs and parameters yourself.

As much as you can, standardize on inputs and outputs, and only allow customizing through hooks you integrate in your base shaders. So, for a vertex shader, you might have bone matrices and a VP matrix in your uniforms, and that's basically it. Your hook might be allowing pre-bone and post-bone modifications of the vertex position.

In some of my work, I have a cbuffer/UBO named "cb_MaterialVertexParams" which contains all of the artist-controlled parameters special to that material. My shader compiler looks for a buffer of that name, re-packs the cbuffer/UBO for the target platform, and generates reflection data and controls engine-side. So when that material is chosen, those UBOs can be hooked up. Engine just fills in the data in the cbuffer in the right offsets, and it works.

This is divorced from "cb_EngineVertexParams" which contains bone/VP.

It's never *that* clean in practice, and I have several hacks littered throughout, but that's the basic gist.

How this works with bgfx, I don't know. I find that abstraction layers like that mostly get in my way, so I try to avoid them.

Hmmm, this is something I didn't think about at all, but it feels like an enormous project in and of itself. It made me realize how naive my hard-coded set of materials + shaders is!

Fortunately, I think for my purposes, hard-coded shaders will work just fine. I've also decided I will be saying "gently caress it" and using ABC + pointers to let me use different material types.

You're right about bgfx. It provides an abstraction on the platform-specific APIs but that also means transpiling shaders and the lack of any sort of DX12/Vulkan style resource binding model for shader resources.

Thanks for the help!
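For the record, the ABC + pointers version I'm going with looks roughly like this (the names and the `shaderName` hook are placeholders, not real bgfx API):

```cpp
#include <memory>
#include <string>
#include <vector>

// Abstract base class route: each material type knows which shader program
// it belongs to and (in real code) how to bind its own uniforms.
// All names are illustrative.
struct Material {
    virtual ~Material() = default;
    virtual std::string shaderName() const = 0;
};

struct UnlitMaterial : Material {
    float color[4] = {1, 1, 1, 1};
    std::string shaderName() const override { return "unlit"; }
};

struct PbrMaterial : Material {
    float metallic = 0.0f;
    float roughness = 1.0f;
    std::string shaderName() const override { return "pbr"; }
};
```

It gives up data locality (each material lives behind a pointer), but the draw loop stays dead simple.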

Brownie
Jul 21, 2007
The Croatian Sensation

roomforthetuna posted:

Isn't healing just a type of damage where the quantity is negative and the type is "healing"?
(This way you could eg. make some kind of things resistant or especially susceptible to healing, too, assuming the purpose of the 'type' in damage is resistances/vulnerabilities.)

I was going to suggest the same thing. Divinity 2 does something like this, which allows you to damage the undead through healing spells which is fun.
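Something like this, I imagine -- resistances/multipliers applied uniformly, with heals just carrying a negative amount, so an undead with a negative healing multiplier takes damage from heals (all names and values here are illustrative):

```cpp
#include <cassert>
#include <unordered_map>

// "Healing is just negative damage with its own type": one code path applies
// all damage, and per-type multipliers cover resistances, vulnerabilities,
// and undead-hurt-by-heals. All names/values are illustrative.
enum class DamageType { Physical, Fire, Healing };

struct Defender {
    int hp;
    std::unordered_map<DamageType, float> multiplier;  // missing = 1.0
};

// negative amount == healing; returns the applied delta
int applyDamage(Defender& d, DamageType type, int amount) {
    auto it = d.multiplier.find(type);
    float m = (it != d.multiplier.end()) ? it->second : 1.0f;
    int delta = int(amount * m);
    d.hp -= delta;
    return delta;
}
```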


Brownie
Jul 21, 2007
The Croatian Sensation

jizzy sillage posted:

Unreal can be a bit trickier than Unity for stylised/non-PBR art styles. You generally have two paths available:

a) Default engine. Your work will mostly live in the Post Process Materials. These can be in the render pipeline in a few places (before/after translucency, before tonemapper, replaces tonemapper). You're limited to access render buffers already set up by the engine, so some effects become very hacky.

b) Custom engine. You can add or remove whatever render passes and buffers you want. World is your oyster but you need a strong understanding of rendering to go this route.

But there's also:

c) Secret third option. There's actually way to do custom rendering stuff that inserts into the render pipeline, the magic words here are SceneViewExtension.

If you're at a company working on a large project, it's strongly recommended you use a custom engine build anyway, so you may be comfortable with either B or C, especially if you have a tech art background and can write C++ and HLSL.

Guilty Gear Strive did some really slick cel shading in Unreal without needing any custom render passes as far as I know - you can just plug any material into the emissive output and it'll ignore lighting information, so all lighting can then be faked by you.

Yeah, the major caveat is that you won't have any lighting information available to you inside the shader graph (except the directional light direction, I think), so if you still want to be using point and spot lights you'll have to either build your own SceneViewExtension that replaces their lighting pass or overhaul the renderer completely. Either way I believe you'll need source modifications?
