|
Nude posted:https://www.youtube.com/watch?v=j8N-c8H0pgw

-Texture memory savings: 3D skeletal animations, even with all the parts-swapping they're doing, are a whole lot more compact than the corresponding HD sprite sheets, even tiled and compressed to hell and back, and the animations themselves live in conventional memory. Arcsys devs were constantly struggling with video memory limitations for their PS3 games, removing frames from old moves to make room for new ones. Skullgirls gets around this by constantly decompressing and streaming the needed sprites from system to video memory, which required very clever coding and constant optimization.

-Resolution independence: now you're playing with vector data, so all you need to up the resolution is to up the resolution. People using Gedosato to make 4K shots of Xrd have produced nothing but breathtaking results.

-Changeable outfits: all the animation data is separate from the graphics, and even characters with swappable parts don't have that many compared to the equivalent sprite sheet. So as long as you make new parts that conform to the same skeleton you can go wild. Currently seen in the game in two instances: one character gets a different outfit in the newest version, and another has a superpowered form in which all his moves gain new properties: in the 2D games that was just his normal sprite blinking red, in Xrd he's got an entirely different design that shares the same animations. But of course the real benefit of all that is gonna be future paid DLC outfits.
|
# ¿ Jan 25, 2016 14:36 |
|
|
# ¿ May 21, 2024 16:40 |
|
Haledjian posted:I have a feeling that if I had any kind of maths/programming background I could probably figure out a way to do it in the Unreal material editor (since it lets you do pretty much whatever with UVs). But unfortunately I think it's beyond me, haha (or at least would sidetrack me way too much from getting the main systems stood up). Holding out hope that I can get someone to help me out with it in the future at some point.
|
# ¿ Feb 13, 2018 15:58 |
|
That was the origin story of FNAF, yeah.
|
# ¿ May 2, 2018 17:43 |
|
Hammer Bro. posted:I might be misconstruing but it seems like if people commonly recommend some extension for some basic functionality then I'd expect that the implementation of that functionality leaves something to be desired.
|
# ¿ Jun 20, 2018 23:04 |
|
It's worth noting that the Dispose() you've been calling in XNA isn't a destructor at all. It's just, well, Dispose(). There are no destructors in C#, for that matter; the closest thing is finalizers, and they can't be called explicitly, only the garbage collector can call them. But yeah, remove all outstanding references and your object will be collected, eventually.
Chev fucked around with this message at 11:49 on Jun 27, 2018 |
# ¿ Jun 27, 2018 11:44 |
|
The first two Siren games did it. It was suitably creepy.
|
# ¿ Jul 3, 2018 13:35 |
|
Kassoon posted:Also a lot of engines poo poo themselves when they get too far away from 0,0 so keeping the player/camera centered there and moving the universe around them is fairly sensible.
|
# ¿ Aug 24, 2018 15:32 |
|
The levels thing is especially easy with procgen. That's how there were 500 levels in Populous, then they reversed the seed and got 500 more for the expansion pack, and the extra galaxies in NMS are similarly about feeding different seeds to the generator.
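For illustration, the whole seed trick fits in a few lines. This is a toy Python sketch, not code from any of those games; `generate_level` and its tile alphabet are invented for the example:

```python
import random

def generate_level(seed, width=8, height=8):
    """Toy procgen: derive a whole tile grid from a single seed."""
    rng = random.Random(seed)  # private RNG: the seed fully determines the level
    return [[rng.choice(".#~") for _ in range(width)] for _ in range(height)]

# Same seed, same level: you only need to ship the seed, not the level data.
assert generate_level(42) == generate_level(42)
# A different seed is effectively a free extra level from the same generator.
assert generate_level(42) != generate_level(43)
```

The point being that the "500 more levels" cost essentially nothing beyond picking 500 more seeds.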
|
# ¿ Nov 27, 2018 02:08 |
|
I feel having an environment that makes it easy to rename classes really helps, because even if a name is stupid when you come up with it, down the line you can rename the class once you have a better idea of what it does (like, my shader manager is really a shader cache, because all it does is load and cache shaders).
|
# ¿ Jan 7, 2019 20:39 |
|
Bert of the Forest posted:resolved by changing the "Transparency Sort Mode" in Graphics settings to be Orthographic instead of Perspective, even though I'm still using a perspective camera. And in the one-point perspective used by every game camera ever, billboards are usually aligned not towards the camera's position but towards the camera plane, so in turn the distance to the camera plane should be used for sorting.
|
# ¿ Mar 2, 2019 21:13 |
|
Your Computer posted:I know this isn't super interesting and it's still baby steps, but I feel like I'm starting to understand at least a bit more about shaders I learned about using rendertextures with shaders to make effects, and I can already imagine some of the effects I want to implement! Other than that I added some more stuff to my shader like emission (self-lighting?) with support for a texture to make parts of a model less/unaffected by lighting. None of this is actually a game but.. it's related, right?
|
# ¿ Mar 19, 2019 20:47 |
|
The lower-level way it's done, in XAudio2 with SharpDX for example, is that you can chain sound buffers in a given source voice (the object that plays sounds), so you just submit both buffers in order and either tell it to loop the second one or re-submit it in a loop. That's also how compressed stuff like Ogg works: you decompress only enough to fill a couple of small fixed-size buffers, and whenever one's been consumed you fill it with the next bit of uncompressed data and append it again. That second bit is encapsulated in Unity's "compressed in memory" and "streaming" load types for audio clips. Anyway, yeah, I'm rather surprised Unity doesn't have a "playAppended" method or something in addition to PlayScheduled, it would make all that much easier.
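As a rough sketch of that double-buffering loop, here's plain Python standing in for the voice/callback machinery; `stream_clip`, the buffer size, and the queue depth are all invented for the example:

```python
from collections import deque

BUFFER_SIZE = 4  # samples per buffer; real engines use a few thousand

def stream_clip(samples, num_buffers=2):
    """Simulate double-buffered streaming: decode only enough data to keep a
    short queue of fixed-size buffers ahead of the playback cursor."""
    queue = deque()
    played = []
    pos = 0
    # Prime the queue, as you would before starting the source voice.
    while len(queue) < num_buffers and pos < len(samples):
        queue.append(samples[pos:pos + BUFFER_SIZE])
        pos += BUFFER_SIZE
    # Each time a buffer is consumed, refill it with the next chunk and resubmit.
    while queue:
        played.extend(queue.popleft())  # the "buffer finished" callback fires here
        if pos < len(samples):
            queue.append(samples[pos:pos + BUFFER_SIZE])
            pos += BUFFER_SIZE
    return played

clip = list(range(10))
assert stream_clip(clip) == clip  # playback order is preserved, chunk by chunk
```

Only two small buffers ever exist at once, which is the whole memory win over decompressing the entire clip up front.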
|
# ¿ Jun 9, 2019 00:22 |
|
The way perspective transforms are implemented in real-time rendering completely messes up z-buffer precision (instead of encoding z you're encoding D = a * (1/z) + b, which isn't linear at all) in exchange for being able to transform your objects with a simple matrix multiplication, so you lose precision way faster than you'd think, thus z-fighting. Ways around it for road markings:

-There's a bias parameter in rendering APIs that should allow you to offset polygons by a somewhat constant post-encoding amount, or at least that's the intention, but in practice no two GPUs implement it the same way so it's likely to disappoint you. Worth a try, though!

-Put the line in the road texture! Or directly in the road shader if you want that sharp look. The idea is that if you draw the line directly as part of the road polygon, there's no z-fighting issue. Either in a single pass or with a second overlay pass that uses the same polygons and the equality depth test. That's probably the best option.

-Deferred decals! If your renderer is deferred it's a reasonably handy way to do it.

-Per-pixel depth shading! Shading languages allow you to play with the depth writes themselves, at the price of losing early-z shader optimizations, so you can do stuff like logarithmic depth buffers, increasing z-buffer precision to planetary scale. Not really worth the hassle if you don't need planetary scale though. More simply, you can use it to implement a linear z bias yourself, but that'll still be subject to precision loss, although I think a big enough math nerd could minimize it.

tl;dr use textures for lines.

Chev fucked around with this message at 00:50 on Jun 19, 2019 |
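To see how brutal that precision loss is, here's a quick Python check of the D = a * (1/z) + b encoding; the near/far values are just example numbers, not anyone's actual settings:

```python
def depth_encode(z, near=0.1, far=1000.0):
    """Hyperbolic depth as produced by a standard perspective projection:
    maps z = near to 0 and z = far to 1, but very unevenly."""
    return far / (far - near) - (far * near) / ((far - near) * z)

near, far = 0.1, 1000.0
# The z value whose encoded depth is exactly 0.5:
midpoint_z = 2.0 * far * near / (far + near)
assert abs(depth_encode(midpoint_z, near, far) - 0.5) < 1e-9
# With near=0.1 and far=1000, that's roughly z = 0.2: half the depth buffer's
# encoding range is spent on the first 0.2 units in front of the camera, which
# is why distant coplanar polygons (like road markings) z-fight so easily.
assert 0.19 < midpoint_z < 0.21
```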
# ¿ Jun 18, 2019 19:04 |
|
Just a correction there regarding delegates, so you know: an anonymous function can be used where a delegate's expected but a delegate isn't an anonymous function. In C# the name for an anonymous function is, well, an anonymous function (of which there are two types, anonymous methods and lambda expressions). Instead a delegate's basically a function pointer, and so it can point to an anonymous function just as well as a named function, and isn't either of those, just like a reference isn't an object.
|
# ¿ Jul 18, 2019 00:41 |
|
The delegate's the type, yes. But it's defined like this: delegate int somefunction(int x, int y). That's the signature. The thing you've written, (int, int) => ..., is a lambda expression, basically a block of inline code treated as an object with a list of incoming parameters, a possible value for a delegate's type. Note that it's not a signature; the provided block will have to match the delegate's signature defined earlier. Notably, the return type is entirely implicit, based on the code inside the block (so it doesn't actually work in the (int, int) => (int) way, that last parenthesis isn't actually a parenthesis and can be any block of code, with as many lines as you want).

So if I take two lines from your doc link:

delegate void TestDelegate(string s);
[...]
TestDelegate testDelC = (x) => { Console.WriteLine(x); };

That first line defines the TestDelegate type, the delegate itself. Then TestDelegate testDelC declares a variable that has that delegate type, and (x) => { Console.WriteLine(x); } is the value assigned (via the same assignment operator as any variable) to that variable, in this specific case a lambda expression. Note how the lambda's parameter list actually isn't explicitly typed at all when you write it. You only know x is a string because it'll be matched against the signature of the delegate that defines the variable you assign it to, just like a method's parameter list is matched against the method's signature.

Chev fucked around with this message at 01:48 on Jul 18, 2019 |
# ¿ Jul 18, 2019 01:23 |
|
TooMuchAbstraction posted:The thing about quaternions is that while, no, you don't have to understand how they work to use them, you do have to understand how to use them. You have a limited set of tools available, and it can be tricky to translate your desired behavior into things those tools can accomplish.
|
# ¿ Jul 21, 2019 11:09 |
|
Ultimately it's the same problem as binding to a game controller that may not be always connected, you need to provide an alternative. Having some reserved keys always work in menus as a substitute can be one, but there's also the option of having a specific key or start option dedicated to emergency rebinding.
|
# ¿ Jul 25, 2019 15:08 |
|
Being able to change/delete the keybind file is good, but it's worth keeping in mind many users, especially the kind that'd bind themselves into a dead end, will have no idea where said file can be found.
|
# ¿ Jul 26, 2019 02:31 |
|
I mean, even outside of that specific aspect bindings should be, well, bound only to a specific controller and recalled when that controller's plugged in.
|
# ¿ Jul 27, 2019 02:46 |
|
In the original L-system-for-cities paper, the title of which I forget, the idea was that when you add a new road segment you scan a given circular area around the segment's endpoint (and if your new segment crosses an existing one the endpoint is the intersection point) and if you find one or more existing vertices within that radius you snap your endpoint to the closest or otherwise more appropriate vertex.
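A minimal Python sketch of that snapping step; the function name and tuple representation are mine, not the paper's, and it only handles the "snap to closest vertex" case, not the segment-intersection part:

```python
import math

def snap_endpoint(endpoint, existing_vertices, radius):
    """Snap a new road segment's endpoint to the nearest existing vertex
    within `radius`, or keep it as-is if none is close enough."""
    best, best_dist = endpoint, radius
    for v in existing_vertices:
        d = math.dist(endpoint, v)
        if d <= best_dist:
            best, best_dist = v, d
    return best

verts = [(0.0, 0.0), (10.0, 0.0)]
assert snap_endpoint((9.5, 0.2), verts, radius=1.0) == (10.0, 0.0)  # snaps
assert snap_endpoint((5.0, 5.0), verts, radius=1.0) == (5.0, 5.0)   # too far
```

The snapping is what keeps the road network forming proper intersections instead of accumulating near-miss dangling vertices.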
|
# ¿ Jul 28, 2019 18:06 |
|
TooMuchAbstraction posted:To be honest I personally find 3D art to be easier to do a borderline-acceptable quality level at than 2D. Having to hand-draw each individual frame scares me because I don't think there's any way I could keep a character on-model for all of their animations. If that's any comfort, some companies invest a lot of R&D in getting their 3D characters to become off-model on purpose, to animate better.
|
# ¿ Aug 21, 2019 01:22 |
|
Functionally, the animation data in Blender is bound through bone names, so technically as long as the control rigs are the same you should even be able to link them to the same animation data if you want a preview of all the variants in Blender.
Chev fucked around with this message at 12:31 on Sep 8, 2019 |
# ¿ Sep 8, 2019 12:13 |
|
You also need to be careful with how you're scaling it. If you're just applying a scale transform to the object rather than the mesh, that's likely to trip up a lot of engines or exporters and mess with things like dynamic parenting. To be foolproof, at the end of the day your object should be the "right" scale when you clear all its transforms. Regarding problems with stretchy bones, several exporters and engines used to have a problem with non-uniform scaling (ie different scaling on different axes), simply because they were encoding scaling as a single factor, but I think that's solved nowadays. That being said, when I saw the mention of stretchy bones I misread it as Blender's bendy bones, and that's another problem entirely, those may not be supported in Unity.
|
# ¿ Sep 10, 2019 13:31 |
|
Your Computer posted:shot in the dark here; do any y'all wizards have any idea how one would go about implementing custom texture filtering in Unity's Shader Graph? Given texture dimensions (width, height) and coordinates (u, v), floor(u*width) and floor(v*height) give you the texel position (x, y) (divide by width/height to get the uv of that texel), and fract(u*width) and fract(v*height) give you the interpolation factors from that texel to the neighbours (x+1, y), (x, y+1) and, combined, (x+1, y+1). Just sample those four texels with point filtering and implement your own interpolation. You can do all that with nodes. It's not Unity but Blender, and don't pay too much attention to the node tree since I grouped some of the nodes and it's super small anyway, but here's a test implementing N64-style 3-point filtering I made a couple days ago, just to show it's doable: EDIT: it's a bit too late tonight but if that can help I can provide an annotated version of the Blender file tomorrow. Chev fucked around with this message at 01:57 on Sep 12, 2019 |
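The texel math from that post, written out as a small Python helper; the names are hypothetical and it ignores wrapping/clamping at the texture edge:

```python
from math import floor

def texel_and_fract(u, v, width, height):
    """Map UV coordinates to an integer texel position plus the fractional
    interpolation factors toward the (x+1, y) / (x, y+1) / (x+1, y+1) neighbours."""
    x, y = floor(u * width), floor(v * height)
    fx, fy = u * width - x, v * height - y
    return x, y, fx, fy

# u=0.3 on an 8-texel-wide texture lands 40% of the way into texel 2:
x, y, fx, fy = texel_and_fract(0.3, 0.75, 8, 8)
assert (x, y) == (2, 6)
assert abs(fx - 0.4) < 1e-9 and abs(fy - 0.0) < 1e-9
```

Those fx/fy values are exactly what you'd feed into your own lerp (or 3-point) nodes after the four point-filtered samples.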
# ¿ Sep 12, 2019 01:45 |
|
Aw yeah, seems you've figured it out! Didn't realize you had access to a node that can integrate custom code. But yeah, to explain a bit:

-Your fundamental mental obstacle was that you can't access texture data without a sampler state. A texture sample always has a sampler state and a texture; it is by definition the combination of those things.

-As mentioned, when doing your own filtering you need to take several samples per pixel, as many as you need for your own sampling function (4 for bilinear, 16 for bicubic, etc). So for bilinear you'd take samples T00, T01, T10, T11 to have the whole neighborhood, then do A = lerp(T00, T01, dx), B = lerp(T10, T11, dx) and finally lerp(A, B, dy) to get your filtered color.

-To reduce the number of samples you need you can still use the existing samplers. Like, whenever you need a weighted average of four texels you could use a bilinear sampler even if your final filtering isn't bilinear. Some of the fancy modern antialiasing techniques, FXAA and onwards, put that to good use.

For N64 filtering, the paradox is that even though you're emulating 3-point filtering, on modern hardware you still need 4 samples, so what was originally a performance measure now makes the shader a tiny bit more complex. That's just how emulating things goes, I guess. That's because the pixel shader is going to sample all possible necessary locations. I mean, technically the shader can still have conditions, but that's what it'll do behind the scenes. In the bit of code you found, the condition part is done through the "step" function, which is "if a < b return 0 else return 1". So that code computes the possible colors for both 3-point pairs in a texel neighborhood and then weights them using step, effectively choosing between them.
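Here's that step-based 3-point weighting as a scalar Python sketch; a real shader does this per channel, and the function name and test values are made up for illustration:

```python
def three_point(t00, t10, t01, t11, fx, fy):
    """N64-style 3-point filtering built from the four sampled neighbours,
    using a step() on fx+fy to pick the lower-left or upper-right triangle."""
    step = 1.0 if fx + fy > 1.0 else 0.0  # branchless select, as in shader code
    lower = t00 + fx * (t10 - t00) + fy * (t01 - t00)              # triangle A
    upper = t11 + (1 - fx) * (t01 - t11) + (1 - fy) * (t10 - t11)  # triangle B
    return lower * (1 - step) + upper * step

# At exact texel corners it reproduces the texel values...
assert three_point(0.0, 1.0, 2.0, 3.0, 0.0, 0.0) == 0.0
assert three_point(0.0, 1.0, 2.0, 3.0, 1.0, 1.0) == 3.0
# ...and in between it interpolates across only three of the four samples.
assert three_point(0.0, 1.0, 2.0, 3.0, 0.5, 0.0) == 0.5
```

Both triangle candidates get computed every time and step just weights one of them to zero, which is the "sample all possible necessary locations" behavior described above.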
|
# ¿ Sep 12, 2019 10:30 |
|
Zaphod42 posted:You should be able to render to texture and then use that texture without updating it. That should be super cheap. The only challenge is making it tile smoothly on the boarder.
|
# ¿ Sep 23, 2019 20:14 |
|
Your Computer posted:so I've started throwing some stuff together to see if I can use ProBuilder/Polybrush for level design and I've run into an annoyance - the mipmapping is kicking in right in front of the player, and since I'm using my own filtering instead of trilinear filtering, there's a sharp, flickery line between each mipmap level, shown here and most easily seen on the grass: Just use the top level for now (should be 0), but for a future better implementation you'll have to figure out, likely based on screen space derivatives (ddx and ddy in hlsl), which two mipmap levels of the texture you need, do the triangle interpolation thing for both levels, then blend between them (can't remember if the N64 did blend between mipmap levels). EDIT: looking at this https://forum.unity.com/threads/calculate-used-mipmap-level-ddx-ddy.255237/ you even have a function for computing the desired LoD. Then split it into fractional and integer parts to get the base mipmap and blend factor. Chev fucked around with this message at 01:26 on Sep 26, 2019 |
# ¿ Sep 26, 2019 00:40 |
|
Your Computer posted:point filtering by necessity, since I'm doing my own texture sampling in code. That said, the import settings for filtering are discarded anyway when using the shader since it supplies its own sampler state. Right. Too much info at once. Lemme try again.

That visible fringe, the transition between mipmap levels, is just something you're gonna get if you use mipmaps but not trilinear or anisotropic filtering, short of disabling mipmapping, which will introduce aliasing artifacts instead. It's worth noting an amusing way to mitigate it somewhat, along with aliasing artifacts, is mentioned in what I've seen of the N64 SDK's manual: blur the texture in the first place, as it'll make all those things less jarring. That's right, not only were N64 textures blurry due to memory limitations, the SDK was advising people to make them extra blurry. That's just delightful, in a way.

As for where those thresholds lie, it's based on the screen space derivatives of the UVs. To put it simply, the GPU will determine how zoomed out the texture is in screen space and choose a mip level where the texture's texels are wider than or at the same width as the screen's pixels, so that there is no aliasing (which happens when you jump over several texels as you go from one pixel to its neighbor). That does mean that the mipmap transitions depend on render target resolution, so unless you fiddle with the numbers you'll get better detail than an actual N64 would give unless you match the output resolution (not that it really matters unless you're specifically trying to fake or emulate an N64's output to pixel perfection).

Anyway, the CalculateLevelOfDetail thing (which certainly has an equivalent Unity macro) is there precisely because you don't need to know how that works but you may need the calculated value. Just pass the UVs to it and it'll return the mip level for the current pixel, but as a float. That is to say, if it returns 1.5 as the mipmap level, it means it's judged it to be halfway between levels 1 and 2, so if you're not using trilinear filtering, just rounding it (well, using the floor or ceiling operator, can't remember which) should be fine.

---

Now, I'm bothering you with all that because the 3-point filtering we discussed earlier has a bit of a flaw (common to all custom texture sampling schemes): it uses the pixel dimensions of the texture, or rather of the mipmap level you're sampling. If you sample a 64x64 or 32x32 texture with samples spaced for a 128x128 one, you don't get the right sampling anymore. You can see it happening in your splish splash video: the further mipmap levels have squares instead of the triangular interpolation, so at the very least when you made that one you weren't aware of the problem (although your lod comparison screenshot seems to do it correctly). I don't know if the texel size node in your shader node tree screenshots takes mipmaps into account, but if it doesn't, you need to adjust your texel size values based on the current mipmap level. So the idea is to retrieve the mipmap level with CalculateLevelOfDetail, then your actual texel size is original_texture_texel_size * (2 to the power of the rounded mip_level), if we follow standard mipmapping dimensions. Or, indeed, just not using mipmapping will get rid of the artifact.

Chev fucked around with this message at 17:48 on Sep 26, 2019 |
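That texel-size adjustment can be sketched like so, with Python standing in for shader code; the names are invented and it assumes a standard power-of-two mip chain for a square texture:

```python
def mip_texel_size(base_width, base_height, mip_level):
    """Texel size (in UV units) at a given mip level: each level halves the
    resolution, so the texel footprint in UV space doubles."""
    level = round(mip_level)  # nearest-level selection, no trilinear blend
    scale = 2 ** level
    w = max(1, base_width // scale)   # clamp so deep mips never hit 0 texels
    h = max(1, base_height // scale)
    return 1.0 / w, 1.0 / h

assert mip_texel_size(128, 128, 0) == (1 / 128, 1 / 128)
assert mip_texel_size(128, 128, 2) == (1 / 32, 1 / 32)   # 4x coarser texels
assert mip_texel_size(128, 128, 1.5) == (1 / 32, 1 / 32)  # 1.5 rounds to 2
```

Feed those per-mip texel sizes into the 3-point sample spacing and the deeper levels interpolate correctly instead of showing squares.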
# ¿ Sep 26, 2019 17:44 |
|
I know they know. But if they want to move it they need to know why, and beyond that, as the remainder of my post mentions, the N64-style interpolation is busted mipmap-wise as it is. EDIT: Zaphod42 posted:Makes sense its automatic based on resolution, but you should still be able to force certain LOD levels manually? For them, the best option is to get the value from CalculateLevelOfDetail, manipulate it in some way (for example by adding/subtracting a constant to/from it), then pass the altered value to SampleLevel. Technically there's a SampleBias function that should do just that, but like some other bias functions it's poorly documented (in fact its documentation is partially wrong) and I can't guarantee it'll behave the same on all hardware without some extensive testing. It is a bit of a fool's errand because the mip thresholds will move depending on resolution anyway. You could also use a fixed resolution but as I said that could be pushing the N64 emulation too far. Using distance or some other alternative to derivatives or CalculateLevelOfDetail to choose the mipmap level is just gonna introduce artifacts because hey, that's not how mipmapping works. Chev fucked around with this message at 18:36 on Sep 26, 2019 |
# ¿ Sep 26, 2019 17:59 |
|
Alright! Well, there are several versions of the shader language(s), AKA shader models. Your Shader Graph is set to output shader code for pixel shader model 4.0, and CalculateLevelOfDetail/CALCULATE_TEXTURE2D_LOD is a pixel shader model 4.1 thing. There's probably a setting somewhere in Shader Graph where you can change that, otherwise you'd need to write the LoD calculation yourself, something I hope we can avoid (EDIT: well, technically it's like five simple lines of code, we'll just have to dig up a couple Unity macros). EDIT: V V V Awesome Chev fucked around with this message at 19:09 on Sep 26, 2019 |
# ¿ Sep 26, 2019 18:42 |
|
Your Computer posted:see this is the kind of stuff I literally could never figure out on my own
|
# ¿ Sep 26, 2019 22:05 |
|
Your Computer posted:now, is this something anyone would even notice? no. Congrats on making it work!
|
# ¿ Sep 28, 2019 00:42 |
|
Your Computer posted:It turned out that CalculateLevelOfDetail returns an already floored value so I had to calculate the mipmap level manually, but since you already mentioned the ddx/ddy stuff I thankfully knew what to google. Not gonna lie the function is like black magic to me, but it works and I get the same lod value (+ fraction!) so I'm happy

The ddx and ddy functions are easily some of the trickiest functions to understand in shader languages, because they kinda break the usual rules. Normally in a pixel shader you cannot access data from a neighboring pixel; you always work on a pixel in isolation, and if you want a neighboring value you have to get it from a texture you rendered earlier. The ddx and ddy functions are the exceptions to that: they give you the difference in a given value between your pixel and an adjacent one in the horizontal (ddx) or vertical (ddy) direction. There are some quirks to it so you can't use it for, say, edge detection or anything like that; it's specifically intended for getting derivatives, ie rates of change of values over a single polygon's surface.

When measuring that on UVs, the length of ddx or ddy (whichever is biggest) gives you the maximum rate at which your texture coordinate is changing from the current pixel to its neighbors. The idea of mipmapping is that this rate of change needs to be less than the texel size of the current mipmap. If it is bigger, it'll "miss" texels in-between two pixels, the phenomenon called aliasing which we seek to avoid by using mipmaps. Since we know the formula to get the mipmap index from the texel size, we just apply the same formula to that rate of change to get the desired mipmap level or combination thereof.

There, that was most of the details you don't need to know.

Chev fucked around with this message at 12:37 on Sep 28, 2019 |
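The derivative-to-mip-level computation is short enough to sketch in Python. This mirrors the textbook formula (largest UV rate of change in texel units, then log2); the function name is mine, and real hardware clamps and biases this in implementation-specific ways:

```python
from math import floor, log2, sqrt

def mip_level_from_derivatives(ddx_uv, ddy_uv, tex_width, tex_height):
    """Mip selection from screen-space UV derivatives: take the larger rate
    of change measured in texels per pixel and use log2 of it."""
    dx = sqrt((ddx_uv[0] * tex_width) ** 2 + (ddx_uv[1] * tex_height) ** 2)
    dy = sqrt((ddy_uv[0] * tex_width) ** 2 + (ddy_uv[1] * tex_height) ** 2)
    rho = max(dx, dy, 1e-8)        # max texel step; epsilon guards log2(0)
    lod = max(0.0, log2(rho))      # under 1 texel/pixel, stay on level 0
    return floor(lod), lod - floor(lod)  # integer level + trilinear blend factor

# One screen pixel stepping exactly 1 texel: full-res level 0.
assert mip_level_from_derivatives((1/128, 0), (0, 1/128), 128, 128) == (0, 0.0)
# Stepping 4 texels per pixel: level 2 (texture minified 4x).
assert mip_level_from_derivatives((4/128, 0), (0, 4/128), 128, 128) == (2, 0.0)
```

The fractional part is exactly the "+ fraction" blend factor mentioned in the quote, i.e. what trilinear filtering would use to blend the two adjacent levels.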
# ¿ Sep 28, 2019 12:26 |
|
Your Computer posted:The lighting is completely messed up, and I can't figure out why. Turning or moving the camera makes objects just stop receiving lights at random:
|
# ¿ Sep 30, 2019 09:04 |
|
In earlier games you'd basically have a jump-up frame and a fall frame, then that was extended to an animation chain of jump up -> post-jump-up loop -> [when vertical speed < 0] start falling -> fall loop (still used in many games today, it's a good chain), but there's some really fancy stuff that can be done with blend trees. Although it's kinda amusing because when they were popularized people would start blending and layering tons of stuff, but in recent years there's been work to design blend trees in smarter ways to use fewer base animations and reduce the art load.
Chev fucked around with this message at 08:55 on Oct 8, 2019 |
# ¿ Oct 8, 2019 08:50 |
|
I mean, they still are, because that's how 2D's been done in general in our post-DX7 world where rasterizers have taken the role of blitters. Unless you've been making your own pixel-by-pixel drawing routines, I guess.
|
# ¿ Oct 11, 2019 09:10 |
|
Ghetto SuperCzar posted:When a sort order becomes negative it just stops showing up I guess. I solved it by just adding a huge number to all of the calculations :-/ It's quite likely that the sort order is just handled as z depth internally, so naturally anything with a negative depth would be on the wrong side of the camera's near plane and get culled.
|
# ¿ Oct 12, 2019 00:58 |
|
Imhotep posted:Also, it's really interesting how Blender seemingly can't achieve that look, like, I don't know, I guess it's just my total lack of experience with visual art in general, but it's bizarre to me that that's not easily achievable in Blender, let alone maybe even too difficult to achieve that it's not worth attempting. We've kinda touched upon this with the whole exchange between me and Your Computer about N64 filtering earlier, where they had to go from standard bilinear filtering, which requires slapping down a simple texture lookup node, to understanding what samplers are and how they interact with textures, to combining a number of math nodes or operations to reproduce the N64 three-point lookup on top of what is actually a 4 point lookup, to learning about screen space partial derivatives and the role they play in mipmapping, just so they could reproduce the silly cost-saving filtering that gives textures on that hardware their particular look and feel. To top it off, even if you've got a leaked SDK the filtering is called bilinear in the docs, so unless you find the right page that admits it's not actually bilinear you've first got to look at actual renders from back then really hard until you get what's going on. I myself have gone through a similar process before when replicating PS1-style graphics (which are a bit more involved than just affine texturing). So, essentially, to reproduce that kind of renderer feel you first need to know exactly what that feel actually is, in the strict cold mathematical sense. 
You need to determine the shading equations that were used, the color space (cause nowadays 3d software and hardware tend to default to what's called a linear color space, necessary for good shader math, but that's kind of a post-2012 thing; before that they were using gamma space, which meant the shading was mathematically wrong, but they didn't care and the resulting feel was different), the post-processing (very likely dithered) and color depth (256 colors?), the bump mapping algorithm used (surprise surprise, normal mapping was first introduced to the world at Siggraph 1998, one month after Banjo-Kazooie came out, so it cannot have been normal mapping that was used in rendering their stuff). A lot of it you can infer if you know what software was used and at what point in computer graphics history it was made, but anyway, you've got to be a pretty technical person with dubious hobbies, or know one, or have access to the blog of one (everyone and their uncle learned about N64 filtering from the same blog post), and then also find out how to reproduce the process in your modern software of choice, and also figure out which of those details are really important (the normal mapping thing maybe isn't). Or, the alternative, have access to the 3d program that was used in the first place and just use that. In fact it cracks me up (but heartwarmingly) that there are people devoted to preserving old dev tools for exactly that kind of purpose. In our specific case, if someone were to reproduce that look in Blender, the starting point would be knowing what precise version of 3DS max produces the right results, and knowing what the shaders used are called in it, plus a couple material settings with accompanying renderings as spheres that can be used as reference, including a bog standard white boring sphere with and without specular, lit from a single light, from which color space and other niceties could be inferred.
Then with a bit of information hunting the right shading equations can likely be found and implemented as reusable node groups for blender, along with rendering settings. TL;DR What I'm saying is you need to be not only a big nerd, but the biggest nerd. Chev fucked around with this message at 13:46 on Oct 13, 2019 |
# ¿ Oct 13, 2019 13:18 |
|
Omi no Kami posted:
If you're in Blender just link your character and whatever other reference you need to each file that needs them.
|
# ¿ Oct 18, 2019 18:02 |
|
|
|
Your Computer posted:something similar was suggested with the doublejump (of "leaving behind" parts of the rig) but I simply can't figure out a way to actually feasibly do that. I could animate the legs to go downwards like you're suggesting but it would have to be at the exact same velocity as the player is moving up, and the only way that would look right is if it's synced up exactly with the jump code.
|
# ¿ Oct 26, 2019 02:14 |