Bongo Bill
Jan 17, 2012

"Entity Component System" is so named because it has things called Entities, things called Components, and things called Systems. For this reason, to avoid confusion, I suggest taking care not to refer to an ECS library, implementation, architecture, etc. as a "system," even if it is a system. Lots of things are systems.


Megafunk
Oct 19, 2010

YEAH!

Ranzear posted:



My ideal is more of an Entity+Tag+Behavior system. Sounds like ECS with extra steps, but ECS iterates entities to find components to execute behaviors while I want to flip that around and iterate behaviors to find entities (by tags in basically a publish/subscribe registration) that want to execute them. This is even still separate from the effect chains I mentioned in the prior post which are just a stack of interactions that might form one 'behavior' with multiple inputs or interrupts.

Tags work as both an 'invoke me for x behavior' and 'I am a potential target for x or y behavior'. Also an entity can have varying presence in different layers of the simulation, even different hitboxes for stuff like physics vs contact checks, and tags control those interaction layers as well.


I wonder if this unique "local systems" ECS idea would appeal to you. It's a long video but he goes into his new ideas in the later sections.
https://youtu.be/jjEsB611kxs

The gist is each entity updates its own set of systems instead of the "global" conventional ECS querying loops, etc. Of course, global systems can still be there.

Megafunk
Oct 19, 2010

YEAH!
And yeah, in the new UE5 ECS (Mass) they opted to call systems Processors and components Fragments to avoid confusion. It's fairly similar to the early days of DOTS but it's already been used in their Matrix demo. It is designed to live alongside actors rather than replacing them if that assuages fears of the DOTS debacle.

Ranzear
Jul 25, 2013

Chainclaw posted:

The problem with Entity Component Systems and entity systems with components is they share a lot of terminology but aren't really related.

Inasmuch as I've experienced and understood it, Unity and Godot do a lot more of the latter than the former, and that context was rather important to what I wanted to convey. They don't really expose the 'system' side of things in a useful way, to many detriments.

I'm currently writing the system I described in Rust and just used it as a pointer on how Unity mucks up temporality so much that every game made in it ends up with the forbidden behavior singleton eventually.

But yes, 'let the behavior/system find the entities' does read like how an ECS should work, just like how my isolation of the simulation model, sending only display data for the client to view and receiving only remapped controller inputs back across the server/client gap, might sound familiar too. I'm not claiming to have invented anything new.

Ranzear fucked around with this message at 23:34 on Oct 28, 2022

Megafunk
Oct 19, 2010

YEAH!
I didn't mean to accuse you of pretending to invent that pattern, I just thought you might enjoy the video.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Ranzear posted:

Inasmuch as I've experienced and understood it, Unity and Godot do a lot more of the latter than the former, and that context was rather important to what I wanted to convey. They don't really expose the 'system' side of things in a useful way, to many detriments.

I'm currently writing the system I described in Rust and just used it as a pointer on how Unity mucks up temporality so much that every game made in it ends up with the forbidden behavior singleton eventually.

But yes, 'let the behavior/system find the entities' does read like how an ECS should work, just like how my isolation of the simulation model, sending only display data for the client to view and receiving only remapped controller inputs back across the server/client gap, might sound familiar too. I'm not claiming to have invented anything new.

Unity has an ECS that behaves nothing like you implied it behaved. And isn't central to the general operation of the engine in any way. You're basically just spewing FUD

Ranzear
Jul 25, 2013

Megafunk posted:

I didn't mean to accuse you of pretending to invent that pattern, I just thought you might enjoy the video.

It's over two hours long, understandably I didn't get to it yet, lol.

leper khan posted:

Unity has an ECS that behaves nothing like you implied it behaved. And isn't central to the general operation of the engine in any way. You're basically just spewing FUD

Maybe a little. I've only had exposure to mostly bad Unity development and pre-3.0 Godot.

Ranzear fucked around with this message at 23:44 on Oct 28, 2022

Chainclaw
Feb 14, 2009

Unity has a system where you use entities and components, and Unity also has an Entity Component System.

They are different. Yes, the terminology is stupid.

Another fun overloaded term is "actors". A lot of game engines refer to different things as actors. In some, it's the same as a Unity entity; in others, it's a unique special thing meant for skinned meshes. These are all unrelated to the Actor Model used outside of games. https://en.wikipedia.org/wiki/Actor_model

My personal bugaboo the last few years has been people using AR for a bunch of different things. So I have no idea if they're talking about an automated review system, augmented reality, etc.

Chainclaw fucked around with this message at 23:54 on Oct 28, 2022

Ranzear
Jul 25, 2013

Uh, was this particular ECS you're talking about added in 2020? Because like, 'they fixed it!' would be a nice response even though I never really gave a timeframe of my experience. I also happen to know Rocko's project is at least older than that but they may have kept up, if migrating it to DOTS was even possible.

Hell, I had to twist arms to get at least an upgrade to an LTS version when that was a new concept.

Actor model is just Smalltalk-era OOP paradigms to me. The same class that bashed some Smalltalk into me also introduced me to Rust.

Ranzear fucked around with this message at 00:07 on Oct 29, 2022

more falafel please
Feb 26, 2005

forums poster

Megafunk posted:

And yeah, in the new UE5 ECS (Mass) they opted to call systems Processors and components Fragments to avoid confusion. It's fairly similar to the early days of DOTS but it's already been used in their Matrix demo. It is designed to live alongside actors rather than replacing them if that assuages fears of the DOTS debacle.

Makes sense that they call them Processors and Fragments considering they already have long-standing things called Systems and Components.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

more falafel please posted:

Makes sense that they call them Processors and Fragments considering they already have long-standing things called Systems and Components.

Processors is almost an actually meaningful name, too. Better than Systems. Fragments and Components are both not very good names, in that they're just synonyms of "parts" with no suggestion of what their job is.

Ranzear
Jul 25, 2013

quote:

There are only two hard things in Computer Science: cache invalidation and naming things.

-- Phil Karlton

Today I learned the original quote never had "... and off-by-one errors."

Megafunk posted:

I wonder if this unique "local systems" ECS idea would appeal to you. It's a long video but he goes into his new ideas in the later sections.
https://youtu.be/jjEsB611kxs

The gist is each entity updates its own set of systems instead of the "global" conventional ECS querying loops, etc. Of course, global systems can still be there.

Good video. Putting the followup on next. Some of his tweaks are very close to mine. What he describes as proxies serve the most critical purpose of my tags: pointers directly into entity data. But it seems his global systems still run across all entities, whereas my tags will register only the relevant subset for a given behavior to iterate over; his proxies just give a contiguous view.

I don't have the local systems at all, nor even per-component logic. My tagging system ideally(!!) turns every system into a local system. A tag might have additional data, but can be as simple as a boolean by merely existing, which I suppose is the canonical description of a component; I simply haven't decided yet whether I need to split them into two kinds: static data versus dynamic flags. Letting them be both lends itself well to the pointer-passing ease of it all and is what has made them more component-like.

My behaviors are supposed to be much more generic too, more akin to ye olde Starcraft map triggers than any sort of whole game system. Ordered instructions like "Add tag x, decrement tag y by delta if present, if y present and ≤ 0 remove tag y now and add w which will remove z later" should be able to handle a metric buttload of stuff without touching code at all. One more bit of logic over top of this is easy declarations that behavior k must finish before behavior j can start (and k can be just an empty behavior that requires l+m+n+o+p have finished for easy composition).

With this I plan to have tags and behaviors be much more granular, such as:
  • has poison_ticks: decrement poison_ticks.timer, if negative add poison_ticks.tick_time to .timer and add poison_damage tag inheriting poison_ticks.damage value
  • has poison: decrement poison.timer, if ≤ zero add poison_clear tag
Rather than a monolithic poison system, these just run agnostically. Both behaviors are built from very simple increment/decrement, boolean comparison, and add/remove-tag logic, and are the same code path used in different ways. Note that it does not even matter what order those two behaviors run in. Removing the poison runs off that poison_clear tag in a later generic behavior, so any other game system that looks for the poison tag (say: bonus damage from other specific attacks) only has to run before that behavior, and doesn't affect either of the above until then. This is the part that I think solves Rocko's original question, which I conveyed really badly the first time (flu symptoms aren't an excuse).
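
As a sketch, one of those generic timer behaviors could look something like this in Rust (all names here, `Tag`, `tick_timer_tag`, `poison`, `poison_clear`, are illustrative, not the actual engine's):

```rust
use std::collections::HashMap;

// A tag is just a bit of data attached by name; an entity is a bag of tags.
#[derive(Clone, Copy)]
struct Tag {
    timer: f32,
    value: f32,
}

type Entity = HashMap<&'static str, Tag>;

// One generic behavior: decrement a timer tag and, when it expires,
// remove it and add another tag that inherits its value.
fn tick_timer_tag(e: &mut Entity, src: &'static str, on_expire: &'static str, dt: f32) {
    let expired = match e.get_mut(src) {
        Some(tag) => {
            tag.timer -= dt;
            tag.timer <= 0.0
        }
        None => false, // tag absent: the behavior is a no-op, no dependency needed
    };
    if expired {
        let value = e.remove(src).map(|t| t.value).unwrap_or(0.0);
        e.insert(on_expire, Tag { timer: 0.0, value });
    }
}

fn main() {
    let mut e: Entity = HashMap::new();
    e.insert("poison", Tag { timer: 1.0, value: 5.0 });

    tick_timer_tag(&mut e, "poison", "poison_clear", 0.6); // still ticking
    assert!(e.contains_key("poison"));

    tick_timer_tag(&mut e, "poison", "poison_clear", 0.6); // expires this tick
    assert!(!e.contains_key("poison"));
    assert_eq!(e["poison_clear"].value, 5.0);
}
```

The same code path covers both poison behaviors above just by varying which tag names it's pointed at, which is the order-independence being described.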

It might be clearer now how multiple tags might make a component and multiple behaviors might make a system, or maybe the whole thing is a 'more true' ECS model. I'm just doing it the way I think is sensible and useful and leads to predictable behavior and, bonus, will serialize into easily edited markup files.

Something I'm actually stuck on is whether tags should have some arithmetic behaviors. For instance, a generic incoming-damage tag might get added by multiple sources and should maybe totalize automatically, or maybe I'll just find a way to allow multiple instances of the same tag, with some hidden indexing so stuff like poison_clear only removes the one that invoked it. Rubber-ducking it here, I realize I probably need both; some systems may stack instances while others add to the value.

We have very different environments and endgame goals. He has hard-threaded everything, and strict parallelism goes a long way when building high-entity-count single-scene games that eschew networking entirely. I'm using async (tokio) and aiming for relatively few entities but several to many scenes in parallel, in a strictly networked context with nothing to render locally, so I have a bit more wiggle room for mutexes as necessary, with lots of work stealing. I wouldn't be able to do such sparse and narrow behaviors on fixed threads and affinities.

I really like AI 'stims' as a term and I'll steal it from him when I write my vector junk. AI will run on this same framework but I can decouple it from the main simulation tick rate. AI will still see the fresh entity data, it'll just be running less often and allowed to be crunchier with its own entity pool.

I haven't dug into spatial hierarchy yet, but I think our approach is the same. Anybody who leads a spatial hierarchy definition with the tank example is cool in my book. When I need a skeletal tree with per-bone colliders and stuff it'll all be on a single entity, but this side of things doesn't render so that'd just be for locational damage in an fps or fighting game (so it will still have to be aware of animation poses for instance).

My secret sauce is that the client runs none of this. I'm shoving raw data at it: entity positions, animation states, and whatever else, with looked-ahead deltas for interpolation between updates. The client can render however it likes, and additionally this gives me information control: they can't see, or even guess at, what the server doesn't tell them.
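
That snapshot-plus-delta interpolation can be sketched like so (field names are made up for illustration, not the actual wire format):

```rust
// The server sends a position plus a looked-ahead delta per update;
// the client lerps toward the predicted position between updates.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Snapshot {
    pos: (f32, f32),
    delta: (f32, f32), // predicted movement over the next update interval
}

// t in [0, 1]: fraction of the update interval elapsed since this snapshot.
fn interpolate(s: &Snapshot, t: f32) -> (f32, f32) {
    (s.pos.0 + s.delta.0 * t, s.pos.1 + s.delta.1 * t)
}

fn main() {
    let s = Snapshot { pos: (0.0, 0.0), delta: (10.0, 4.0) };
    assert_eq!(interpolate(&s, 0.5), (5.0, 2.0));
}
```

The point being that the client needs no simulation logic at all, just this blend, which is what makes the information control work.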

Ranzear fucked around with this message at 23:07 on Oct 29, 2022

Kaedric
Sep 5, 2000


Considering your obvious depth of knowledge here you've likely already considered/discarded this, but: I think you could do what you're proposing with a simple event bus. Just publish events (such as phases in MTG in your example), and have subscribers react and do what they need to do at that point. "Tags" would essentially just be a curated list of subscribers.

I'm sure there's a lot I haven't thought of here that makes this untenable.
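
For concreteness, the bus being described might look like this sketch (the event names and the `i32` payload are placeholders; a real one would carry typed events):

```rust
use std::collections::HashMap;

// Minimal event bus: subscribers register per event name, publish walks
// the curated list. "Tags" in the suggestion above map to these lists.
struct Bus {
    subscribers: HashMap<&'static str, Vec<Box<dyn FnMut(i32)>>>,
}

impl Bus {
    fn new() -> Self {
        Bus { subscribers: HashMap::new() }
    }

    fn subscribe(&mut self, event: &'static str, f: Box<dyn FnMut(i32)>) {
        self.subscribers.entry(event).or_default().push(f);
    }

    fn publish(&mut self, event: &'static str, payload: i32) {
        if let Some(subs) = self.subscribers.get_mut(event) {
            for f in subs.iter_mut() {
                f(payload);
            }
        }
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let total = Rc::new(Cell::new(0));
    let mut bus = Bus::new();
    let t = total.clone();
    bus.subscribe("upkeep_phase", Box::new(move |n| t.set(t.get() + n)));
    bus.publish("upkeep_phase", 2);
    bus.publish("upkeep_phase", 3);
    assert_eq!(total.get(), 5);
}
```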

Ranzear
Jul 25, 2013

Tags can exist as just data without being registered to a behavior. Nothing yet prevents a behavior from interacting with other tags either. Haven't decided how it'll bail nicely if one isn't present but I want to subvert having dependencies, so maybe a defaulting system.

Suppose I want a deck-random system per player entity, it would just store RNG state in a tag and behaviors would call for it on demand.

Ranzear fucked around with this message at 06:01 on Oct 30, 2022

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Wait are you talking about Unity tags that are just strings?

This is what I get for asking an architectural moon question and then getting a sinus infection.

Ranzear
Jul 25, 2013

I'm talking about my own engine. Your question did prompt me to look twice at and fix some stuff though.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Well I'm sure I'll have this sinus infection tomorrow so look forward to the bugs I'll find.

Fuschia tude
Dec 26, 2004

THUNDERDOME LOSER 2019

Ranzear posted:

Today I learned the original quote never had "... and off-by-one errors."

Tim Bray says that was added later by someone else.

https://twitter.com/dkarlton/status/895871336941211648
https://twitter.com/timbray/status/506146595650699264

And this guy claims to be the source:
https://twitter.com/secretGeek/status/552779013890904064

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Lmao the off by one joke has been around since at least mid 2000s

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Gotta add "date handling" to the joke I guess.

Grace Baiting
Jul 20, 2012

Audi famam illius;
Cucurrit quaeque
Tetigit destruens.



Jabor posted:

Gotta add "date handling" to the joke I guess.

* Excel has entered the chat *

pseudorandom name
May 6, 2007

Date handling is a subset of naming things.

more falafel please
Feb 26, 2005

forums poster

leper khan posted:

Lmao the off by one joke has been around since at least mid 2000s

Yeah, I was gonna say, I heard the off-by-one part way before I heard the cache invalidation part, if anything I always assumed that was the original joke

Raenir Salazar
Nov 5, 2010

College Slice
So I made a compute shader to try to draw a Delaunay triangulation to an output texture.

In general my approach is: given a pixel in my 2D texture space, I want to know if it should be coloured, and it should be coloured if and only if it is on the line between two Delaunay points.

So, given a list of edges, for every pixel I iterate through the list of edges and check if it's on the line.

After much trial and error and googling various formulas for this, I came upon an approach that checks if a given point is within a certain distance of the line. This works pretty well.

All other approaches, like checking if distance(A,C) + distance(B,C) <= distance(A,B) + epsilon, would miss something like 90% of all line pixels. My assumption is that since the pixel coordinates are integers, the results aren't mapping correctly to the 2D grid.

The main flaw in the approach below is some honestly tolerable inaccuracy: the lines are too thick in places.

I'd ideally like much more accurate lines, "supercover" lines in particular, but I'm not sure how to accomplish this in a compute shader; my current implementation is below:

code:
bool IsPixelOnLine(float2 A, float2 B, int2 C)
{
    const float tolerance = 0.5f;

    float minX = min(A.x, B.x) - tolerance;
    float maxX = max(A.x, B.x) + tolerance;
    float minY = min(A.y, B.y) - tolerance;
    float maxY = max(A.y, B.y) + tolerance;

    //Check C is within the bounds of the line
    if (C.x >= maxX || C.x <= minX || C.y <= minY || C.y >= maxY)
    {
        return false;
    }

    // Check for when AB is vertical
    if (A.x == B.x)
    {
        if (abs(A.x - C.x) >= tolerance)
        {
            return false;
        }
        return true;
    }

    // Check for when AB is horizontal
    if (A.y == B.y)
    {
        if (abs(A.y - C.y) >= tolerance)
        {
            return false;
        }
        return true;
    }

    // Check distance of the point from the line through A and B
    float distFromLine = abs(((B.x - A.x) * (A.y - C.y)) - ((A.x - C.x) * (B.y - A.y))) / sqrt((B.x - A.x) * (B.x - A.x) + (B.y - A.y) * (B.y - A.y));

    return distFromLine < tolerance;
}

[numthreads(16, 16, 1)]
void DrawDelaunayTexture(uint3 id : SV_DispatchThreadID)
{
    if (id.x > Dims.x || id.y > Dims.y)
    {
        return;
    }

    float4 pxColour = float4(0, 0, 0, 1); // black by default

    for (uint di = 0; di < Sizes.y; di++)
    {
        float2 PointA = Edges[di].startPoint;
        float2 PointB = Edges[di].endPoint;
        int2 Point = id.xy;
        if (IsPixelOnLine(PointA, PointB, Point))
        {
            pxColour = float4(1, 1, 1, 1);
            break;
        }
    }

    DelaunayTexOutput[id.xy] = pxColour;
}
Another approach might be to go through the supercover line algorithm specified in the url I linked to, but instead of drawing the line, checking if at any point it reaches the current pixel, and if so colouring that pixel. But that seems like a lot of loops for the GPU.

And I am aware I probably shouldn't be looping through 600+ edges to see if the current pixel is relevant to any of them; I should probably pre-compute that sort of spatial information and pass it in but I wanted to test things out first. :)
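
For reference, a supercover traversal doesn't have to be a per-pixel test at all; on the CPU it can be a pure integer walk that emits every cell the ideal segment passes through. A sketch (not taken from any particular article's variant, so treat the corner handling as one possible convention):

```rust
// All grid cells touched by the segment from (x0,y0) to (x1,y1),
// treating endpoints as cell centers with crossings at half-integers.
fn supercover_line(x0: i32, y0: i32, x1: i32, y1: i32) -> Vec<(i32, i32)> {
    let dx = (x1 - x0).abs();
    let dy = (y1 - y0).abs();
    let sx = if x1 > x0 { 1 } else { -1 };
    let sy = if y1 > y0 { 1 } else { -1 };
    let (mut x, mut y) = (x0, y0);
    let mut cells = vec![(x, y)];
    let (mut ix, mut iy) = (0, 0);
    while ix < dx || iy < dy {
        // Compare the next x-crossing (ix + 0.5)/dx with the next
        // y-crossing (iy + 0.5)/dy, cross-multiplied to stay in integers.
        let decision = (1 + 2 * ix) * dy - (1 + 2 * iy) * dx;
        if decision == 0 {
            // The line passes exactly through a cell corner:
            // include both side cells, then step diagonally.
            cells.push((x + sx, y));
            cells.push((x, y + sy));
            x += sx;
            y += sy;
            ix += 1;
            iy += 1;
        } else if decision < 0 {
            x += sx;
            ix += 1;
        } else {
            y += sy;
            iy += 1;
        }
        cells.push((x, y));
    }
    cells
}

fn main() {
    // A shallow segment touches both rows around its horizontal crossing:
    assert_eq!(supercover_line(0, 0, 2, 1), vec![(0, 0), (1, 0), (1, 1), (2, 1)]);
    // An exact corner crossing includes both side cells:
    assert_eq!(supercover_line(0, 0, 1, 1), vec![(0, 0), (1, 0), (0, 1), (1, 1)]);
}
```

This shape of loop is also usable for precomputing which pixels each edge owns, sidestepping the per-pixel distance test entirely.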

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Have you considered antialiasing your lines? Instead of having a hard "close to a line -> set colour to black; not close enough -> do nothing", you'd set a shade of grey based on how close the pixel is to the line.
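
In sketch form, that shading rule is just a linear falloff (hypothetical helper, with `tolerance` borrowed from the posted shader; the actual blend is up to the shader):

```rust
// Shade by distance instead of a hard threshold: 1.0 on the line,
// fading linearly to 0.0 at `tolerance` pixels away.
fn line_shade(dist_from_line: f32, tolerance: f32) -> f32 {
    (1.0 - dist_from_line / tolerance).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(line_shade(0.0, 0.5), 1.0);  // on the line
    assert_eq!(line_shade(0.25, 0.5), 0.5); // halfway out
    assert_eq!(line_shade(1.0, 0.5), 0.0);  // beyond tolerance
}
```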

Raenir Salazar
Nov 5, 2010

College Slice

Jabor posted:

Have you considered antialiasing your lines? Instead of having a hard "close to a line -> set colour to black; not close enough -> do nothing", you'd set a shade of grey based on how close the pixel is to the line.

Hrm, well, my goal is to eventually be generating a polygon-based procedural world map (finally got back to this project where I last left off in 2021!!! :dance:), so the goal is not necessarily nice lines in and of themselves; I just wanted to test displaying the results to a texture, where it'd be nice if the lines happened to be of a certain type. Last time I drew lines to a texture (using the CPU, which honestly is probably not that slow? Might be worth re-examining, since the number and length of lines might be reasonably finite relative to the area of a texture?), I found that "supercover" lines had the least imperfections for displaying voronoi boundaries without "doubling over" certain boundaries, making them appear thicker than they should.

For testing purposes I think it's a little clearer if something's wrong when there's a clear delineation between pixels (which is convenient for taking the output texture into Paint.net and using the Wand tool!), and anti-aliasing them might make certain errors blend beyond my ability to spot-check things.

e: Although it occurs to me it might be easier to figure out a way of Blit'ing a wireframe mesh to a texture; my first attempt failed in that it blitted the full rendered mesh (as a weirdly shaped concave-looking plane) to a texture when I want the wireframe; maybe there's an easy way to do that, because then I can also wireframe the voronoi mesh and just use the compute shader for flood filling.

Raenir Salazar fucked around with this message at 03:18 on Nov 2, 2022

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Raenir Salazar posted:

Hrm, well, my goal is to eventually be generating a polygon-based procedural world map (finally got back to this project where I last left off in 2021!!! :dance:), so the goal is not necessarily nice lines in and of themselves; I just wanted to test displaying the results to a texture, where it'd be nice if the lines happened to be of a certain type. Last time I drew lines to a texture (using the CPU, which honestly is probably not that slow? Might be worth re-examining, since the number and length of lines might be reasonably finite relative to the area of a texture?)
Yeah, if it's a one-off generate-stuff operation, drawing 600 lines on CPU with a bresenham algorithm is probably just as good as doing 600 operations per pixel in the whole rectangle on the GPU (the increased parallelism isn't really giving you much when you're using it to do hundreds of times more operations).
But what about drawing the lines on the GPU using line-drawing or triangle-pairs? It's super optimized for that poo poo.
Edit: like you said in your edit while I was saying this. :colbert:

Raenir Salazar
Nov 5, 2010

College Slice

roomforthetuna posted:

Yeah, if it's a one-off generate-stuff operation, drawing 600 lines on CPU with a bresenham algorithm is probably just as good as doing 600 operations per pixel in the whole rectangle on the GPU (the increased parallelism isn't really giving you much when you're using it to do hundreds of times more operations).
But what about drawing the lines on the GPU using line-drawing or triangle-pairs? It's super optimized for that poo poo.
Edit: like you said in your edit while I was saying this. :colbert:

Yeah, this is just testing; in my actual use case it will likely be over 10,000 lines. I'll revisit trying to do this by blit'ing a mesh to a rendertexture, and maybe there's a flag to blit the wireframe.

Boldor
Sep 4, 2004
King of the Yeeks
Arcane question, I'm not sure where to ask.

I'm reverse-engineering old computer games, primarily on x86 or possibly x64 architecture (DOS/Windows), but 65xx (Apple ][, Commodore 64) is also on the table. (How old: 10+ years bare minimum, mostly 25+, possibly 40+.)

What I'm trying to crack is the random number generator, which is key to many older games. That long ago, it was common for programmers to roll their own RNG (believe it or not, this was actually a good idea in C before 1997), but far from universal. I'm not sure where I can find previously-done work on this, however; this is difficult enough work that I'm trying to avoid reinventing the wheel.

All resources I know of are either primarily hardware-focused (like vogons.org), or are concerned with adventure games which mostly don't use RNG (like the ScummVM forums).

Is there such a thing?

I don't believe this exists for DOS/Windows (you can easily find it for specific console games, like the original Final Fantasy), because all sources I can find are not reliable enough for my purposes. I'm currently looking at the Turbo Pascal library, which as far as I know is best described in The Random Number Generators of the Turbo Pascal Family (1991). That's reasonably accurate but still has some mistakes and omissions, and anything else I know of (Wikipedia, or any other academic paper I could find) is worse.

If that doesn't exist, I'm interested in curating a list of RNGs... except I'm not sure where to put my work, aside from "my own GitHub repository or web page". What forum might be most interested?
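
For the Turbo Pascal case specifically, the generator usually reported for TP 4+ (and later Delphi) is a 32-bit LCG. A sketch, with the caveat that the constants below are the commonly cited ones and should be verified against the 1991 paper before being trusted:

```rust
// The 32-bit LCG commonly reported for Turbo Pascal 4+ / Delphi:
//   seed := seed * 134775813 + 1  (mod 2^32)
// Constants are "as reported", not verified against original binaries.
struct TpRng {
    seed: u32,
}

impl TpRng {
    fn next_seed(&mut self) -> u32 {
        self.seed = self.seed.wrapping_mul(134775813).wrapping_add(1);
        self.seed
    }

    // Random(n) reportedly scales the high bits of the new seed into 0..n-1.
    fn random(&mut self, n: u32) -> u32 {
        let s = self.next_seed() as u64;
        ((s * n as u64) >> 32) as u32
    }
}

fn main() {
    let mut rng = TpRng { seed: 0 };
    assert_eq!(rng.next_seed(), 1); // 0 * 134775813 + 1
    assert!(rng.random(6) < 6);
}
```

If a game's dice rolls replay correctly against this from a captured RandSeed, that's strong evidence it used the stock library RNG rather than a hand-rolled one.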

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
You might ask this question on the tasvideos.org forums. There's a lot of retrocomputing folks there, and while they're primarily interested in old consoles, that doesn't invalidate their knowledge base. And they may well have connections to other communities that know more.

tango alpha delta
Sep 9, 2011

Ask me about my wealthy lifestyle and passive income! I love bragging about my wealth to my lessers! My opinions are more valid because I have more money than you! Stealing the fruits of the labor of the working class is okay, so long as you don't do it using crypto. More money = better than!

Boldor posted:

Arcane question, I'm not sure where to ask.

I'm reverse-engineering old computer games, primarily on x86 or possibly x64 architecture (DOS/Windows), but 65xx (Apple ][, Commodore 64) is also on the table. (How old: 10+ years bare minimum, mostly 25+, possibly 40+.)

What I'm trying to crack is the random number generator, which is key to many older games. This long ago it was common for programmers to roll their own RNG (believe it or not, this was actually a good idea in C before 1997) but far from universal. I'm not sure where I can find previously-done work on this, however; this is difficult enough work that I'm trying to avoid reinventing the wheel.

All resources I know of are either primarily hardware-focused (like vogons.org), or are concerned with adventure games which mostly don't use RNG (like the ScummVM forums).

Is there such a thing?

I don't believe this exists for DOS/Windows (you can easily find it for specific console games, like the original Final Fantasy), because all sources I can find are not reliable enough for my purposes. I'm currently looking at the Turbo Pascal library, which as far as I know is best described in The Random Number Generators of the Turbo Pascal Family (1991). That's reasonably accurate but still has some mistakes and omissions, and anything else I know of (Wikipedia, or any other academic paper I could find) is worse.

If that doesn't exist, I'm interested in curating a list of RNGs .. except I'm not sure where to put my work, aside from "my own Github repository, or Web page". What forum might be most interested?

In the old days on the Commodore 64 we would generate a random seed using the 6581 sound chip white noise waveform, which is literally random numbers that you can hear. So if you can find a white noise generator you’ll be able to spin up pseudo random numbers pretty easily.

Raenir Salazar
Nov 5, 2010

College Slice

roomforthetuna posted:

Yeah, if it's a one-off generate-stuff operation, drawing 600 lines on CPU with a bresenham algorithm is probably just as good as doing 600 operations per pixel in the whole rectangle on the GPU (the increased parallelism isn't really giving you much when you're using it to do hundreds of times more operations).
But what about drawing the lines on the GPU using line-drawing or triangle-pairs? It's super optimized for that poo poo.
Edit: like you said in your edit while I was saying this. :colbert:

Status Report: Success. :cool:



All it needed was setting GL.wireframe = true.

code:
    public static RenderTexture BlitDelaunayMapToRT(UnityEngine.Mesh InMesh, Vector2Int InMapSizes, Material InMat)
    {
        RenderTexture outRTex = new RenderTexture(InMapSizes.x, InMapSizes.y, 0);

        Nothke.Utils.RTUtils.BeginPixelRendering(outRTex);
        {
            GL.Clear(true, true, Color.black);
            GL.wireframe = true; // my change to this Blit'ing library I found online that does it without needing a camera.

            Nothke.Utils.RTUtils.DrawMesh(outRTex, InMesh, InMat, Matrix4x4.TRS(Vector3.zero, Quaternion.identity, Vector3.one));
        }
        Nothke.Utils.RTUtils.EndRendering(outRTex);

        GL.wireframe = false;

        return outRTex;
    }
I timed it, and for 1600 Poisson-distributed sites the compute shader took I think 140ms, while the blit method seems to take no time at all; I tried timing it in nanoseconds and still got 0. I guess it was done basically instantly? :aaa:

e: although tbf some of the time in the compute shader function is setting up the edges to be passed in; I'd imagine most of that 140 number is the edge collection.

Raenir Salazar fucked around with this message at 23:45 on Nov 2, 2022

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
I have a CoolShots system in my game that takes automatic screenshots when interesting things happen, to show to the player later. This system needs to not draw the HUD, so I turn it off before its camera renders, and then back on afterwards. I've noticed though that sometimes the HUD visibly flickers, which implies that the main camera is also rendering when the CoolShots camera is rendering. Or something.

The CoolShots render code is pretty simple:
code:
        bool hudOn = ToggleableHUD.isHUDOn;
        if (hudOn) {
            GlobalPubSub.Publish(new ToggleHUDEvent(false));
        }
        var current = RenderTexture.active;
        RenderTexture.active = cam.targetTexture;
        cam.Render();
        RenderTexture.active = current;
        if (hudOn) {
            GlobalPubSub.Publish(new ToggleHUDEvent(true));
        }
`cam` here is the CoolShots camera. I slapped some debug code together to log every time OnPostRender fires, and it confirms that the main camera isn't rendering in-between when the HUD is disabled and when it's enabled. I've also been unable to reproduce this in builds, though admittedly I haven't made a huge dedicated effort...has anyone else experienced editor-only issues like this?

Raenir Salazar
Nov 5, 2010

College Slice

TooMuchAbstraction posted:

I have a CoolShots system in my game that takes automatic screenshots when interesting things happen, to show to the player later. This system needs to not draw the HUD, so I turn it off before its camera renders, and then back on afterwards. I've noticed though that sometimes the HUD visibly flickers, which implies that the main camera is also rendering when the CoolShots camera is rendering. Or something.

The CoolShots render code is pretty simple:
code:
        bool hudOn = ToggleableHUD.isHUDOn;
        if (hudOn) {
            GlobalPubSub.Publish(new ToggleHUDEvent(false));
        }
        var current = RenderTexture.active;
        RenderTexture.active = cam.targetTexture;
        cam.Render();
        RenderTexture.active = current;
        if (hudOn) {
            GlobalPubSub.Publish(new ToggleHUDEvent(true));
        }
`cam` here is the CoolShots camera. I slapped some debug code together to log every time OnPostRender fires, and it confirms that the main camera isn't rendering in-between when the HUD is disabled and when it's enabled. I've also been unable to reproduce this in builds, though admittedly I haven't made a huge dedicated effort...has anyone else experienced editor-only issues like this?

Interestingly enough I saw something similar to this in the comments for the library I'm using:

code:
        public static void BeginRendering(this RenderTexture rt, Matrix4x4 projectionMatrix)
        {
            // This fixes flickering (by @guycalledfrank)
            // (because there's some switching back and forth between cameras, I don't fully understand)
            if (Camera.current != null)
                projectionMatrix *= Camera.current.worldToCameraMatrix.inverse;

            // Remember the current texture and make our own active
            prevRT = RenderTexture.active;
            RenderTexture.active = rt;

            // Push the projection matrix
            GL.PushMatrix();
            GL.LoadProjectionMatrix(projectionMatrix);
        }
Does this help?

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
I don't think so...that looks like it's doing some kind of turbo-manual rendering where the projection matrix is being set manually, and then not cleared reliably. I'm doing straightforward renders, I just need certain parts of the game to be disabled when I do.

(it sure would be nice if I could just put those parts on a layer that's invisible to the CoolShots camera, but Screen Space: Overlay canvases ignore that whole system :shepface:)
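(In principle the overlay limitation can be dodged by switching the HUD canvas to Screen Space - Camera, at which point layer culling applies to it again. A sketch, with `hudCanvas`/`mainCam`/`coolShotsCam` standing in for whatever the real objects are called; note this changes how the canvas sorts and scales, so it's not a free swap:)

```csharp
using UnityEngine;

public static class HudCulling
{
    // Hypothetical names: hudCanvas is the HUD's Canvas, mainCam the gameplay
    // camera, coolShotsCam the screenshot camera.
    public static void MakeHudCullable(Canvas hudCanvas, Camera mainCam, Camera coolShotsCam)
    {
        // Screen Space - Camera canvases are rendered like scene objects, so
        // camera culling masks apply to them (unlike Screen Space - Overlay).
        hudCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        hudCanvas.worldCamera = mainCam;

        // The CoolShots camera then simply never sees the HUD layer,
        // and no on/off toggling is needed around its renders.
        coolShotsCam.cullingMask &= ~(1 << LayerMask.NameToLayer("HUD"));
    }
}
```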

Raenir Salazar
Nov 5, 2010

College Slice
So I got a voronoi mesh now, but the results aren't ideal; in particular the voronoi mesh has a lot of outgoing edges along the outermost perimeter of the mesh; so there's parts where there's like several tiny adjacent voronoi cells.



While I am aiming for something more like this.



:sigh:

It isn't exactly clear to me what settings or parameters I'm getting wrong.


TooMuchAbstraction posted:

I don't think so...that looks like it's doing some kind of turbo-manual rendering where the projection matrix is being set manually, and then not cleared reliably. I'm doing straightforward renders, I just need certain parts of the game to be disabled when I do.

(it sure would be nice if I could just put those parts on a layer that's invisible to the CoolShots camera, but Screen Space: Overlay canvases ignore that whole system :shepface:)

Maybe, but here's the full script for context in case it does turn out to be useful!

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Raenir Salazar posted:

So I got a voronoi mesh now, but the results aren't ideal; in particular the voronoi mesh has a lot of outgoing edges along the outermost perimeter of the mesh; so there's parts where there's like several tiny adjacent voronoi cells.



While I am aiming for something more like this.



:sigh:

It isn't exactly clear to me what settings or parameters I'm getting wrong.

Maybe, but here's the full script for context in case it does turn out to be useful!

Probably easiest to push the edge nodes to the edges

Raenir Salazar
Nov 5, 2010

College Slice

leper khan posted:

Probably easiest to push the edge nodes to the edges

No no, that isn't the problem; they just happen not to extend all the way because I was playing with padding and Boundary settings. The problem is how many outgoing edges are close to each other; going all the way to the edge wouldn't change that those regions are very narrow (or very tiny if I clipped them off properly)! I want every cell to basically look like any other cell.
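(The usual trick for making every cell look like every other cell is Lloyd relaxation: after building the diagram, move each seed point to the centroid of its cell and rebuild, a few iterations over. Perimeter slivers get pushed apart and the cells converge toward evenly sized, roundish regions. A rough plain-C# sketch, approximating each cell's centroid by sampling a grid instead of using the actual cell polygons:)

```csharp
using System;

public static class LloydRelax
{
    // One relaxation step: move each seed to the centroid of its Voronoi
    // cell, where cells are approximated by assigning grid sample points
    // to their nearest seed. Call repeatedly for more uniform cells.
    public static (float x, float y)[] Step((float x, float y)[] seeds,
                                            float width, float height, int samples = 128)
    {
        var sumX = new float[seeds.Length];
        var sumY = new float[seeds.Length];
        var count = new int[seeds.Length];

        for (int gy = 0; gy < samples; gy++)
        for (int gx = 0; gx < samples; gx++)
        {
            float px = (gx + 0.5f) * width / samples;
            float py = (gy + 0.5f) * height / samples;

            // The nearest seed owns this sample point.
            int best = 0;
            float bestD = float.MaxValue;
            for (int i = 0; i < seeds.Length; i++)
            {
                float dx = seeds[i].x - px, dy = seeds[i].y - py;
                float d = dx * dx + dy * dy;
                if (d < bestD) { bestD = d; best = i; }
            }
            sumX[best] += px; sumY[best] += py; count[best]++;
        }

        var relaxed = new (float x, float y)[seeds.Length];
        for (int i = 0; i < seeds.Length; i++)
            relaxed[i] = count[i] > 0
                ? (sumX[i] / count[i], sumY[i] / count[i])
                : seeds[i]; // degenerate cell: leave the seed where it was
        return relaxed;
    }
}
```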

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
OK, this is a weird issue: I have to restart Unity to get it to actually pick up changes to my preprocessor. I have a BETA preprocessor directive, which is used to control access to certain prerelease game content. Attempting to add/remove that directive via Project Settings > Player > Scripting Define Symbols just straight up does not work unless I restart Unity. I know this because I have a build preprocessor script, which uses the directive to move some files around, and it logs what it's doing. And while the Project Settings dialog may claim that the symbol is/is not set, the logs will say otherwise.

I feel like this has to be some kind of disconnect between Unity and the compiler? I'm using Visual Studio 2019 (version 16.11.16) for this. Any suggestions for steps I might take to fix the issue? It's mildly annoying in that it adds extra steps to the build process. Mostly I'm worried that I'll accidentally send prerelease content into the real world ahead of schedule.
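(One workaround that sometimes behaves better than the Project Settings field is flipping the define from an editor script, since writing the symbol list also kicks off a recompile. A sketch; `BuildTargetGroup.Standalone` is an assumption about the build target:)

```csharp
#if UNITY_EDITOR
using UnityEditor;

public static class BetaToggle
{
    const string Symbol = "BETA";

    [MenuItem("Tools/Toggle BETA define")]
    public static void Toggle()
    {
        var group = BuildTargetGroup.Standalone; // assumption: standalone builds
        string defines = PlayerSettings.GetScriptingDefineSymbolsForGroup(group);
        bool has = (";" + defines + ";").Contains(";" + Symbol + ";");

        defines = has
            ? (";" + defines + ";").Replace(";" + Symbol + ";", ";").Trim(';')
            : (defines.Length == 0 ? Symbol : defines + ";" + Symbol);

        // Writing the symbols back triggers a script recompile with the new set.
        PlayerSettings.SetScriptingDefineSymbolsForGroup(group, defines);
    }
}
#endif
```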


leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

TooMuchAbstraction posted:

OK, this is a weird issue: I have to restart Unity to get it to actually pick up changes to my preprocessor. I have a BETA preprocessor directive, which is used to control access to certain prerelease game content. Attempting to add/remove that directive via Project Settings > Player > Scripting Define Symbols just straight up does not work unless I restart Unity. I know this because I have a build preprocessor script, which uses the directive to move some files around, and it logs what it's doing. And while the Project Settings dialog may claim that the symbol is/is not set, the logs will say otherwise.

I feel like this has to be some kind of disconnect between Unity and the compiler? I'm using Visual Studio 2019 (version 16.11.16) for this. Any suggestions for steps I might take to fix the issue? It's mildly annoying in that it adds extra steps to the build process. Mostly I'm worried that I'll accidentally send prerelease content into the real world ahead of schedule.

Can you get away with reloading assets instead of restarting? Otherwise I generally recommend setting up a build pipeline that doesn't involve going through the build menu in Unity on your dev machine.
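(For the build-pipeline route, the usual shape is a static editor method invoked via batchmode, e.g. `Unity -batchmode -quit -projectPath <project> -executeMethod CIBuild.Build`. A sketch, with the scene list and output path as placeholders:)

```csharp
#if UNITY_EDITOR
using UnityEditor;

public static class CIBuild
{
    // Invoked from the command line:
    //   Unity -batchmode -quit -projectPath <project> -executeMethod CIBuild.Build
    // Scene list and output path are placeholders for whatever the project uses.
    public static void Build()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // placeholder scene
            locationPathName = "Builds/Game.exe",          // placeholder output
            target = BuildTarget.StandaloneWindows64,
            options = BuildOptions.None,
        };
        var report = BuildPipeline.BuildPlayer(options);
        if (report.summary.result != UnityEditor.Build.Reporting.BuildResult.Succeeded)
            EditorApplication.Exit(1); // nonzero exit code so CI flags the failure
    }
}
#endif
```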
