|
Mata posted:Has anyone written a hardware skinning instancing shader? Using a texture should work fine.
|
# ¿ Oct 4, 2012 15:56 |
|
|
# ¿ May 16, 2024 01:51 |
|
UraniumAnchor posted:This isn't strictly 3D related, but it has to do with textures so it might belong here. If you're trying to compute a total PSNR value for some reason, it might be helpful to know what this is for, since that sounds like a bad idea: you'd have to weight the alpha error, and whatever value you choose for that weight is totally arbitrary. Other things to keep in mind are that perceptual error is greater between differences in dark intensities than bright intensities (distortion computations are generally done in gamma space), and high-frequency hue/saturation distortion is significantly less perceptible than high-frequency intensity distortion. OneEightHundred fucked around with this message at 19:37 on Oct 19, 2012 |
# ¿ Oct 19, 2012 19:32 |
|
Boz0r posted:EDIT: Now that I think about it, do I even need to change anything other than giving it the other array? quote:DOUBLE EDIT: What if I have the normals in a separate array, how do I add them? The key thing to realize is that VertexAttribPointer, DrawElements, etc. operate on the currently bound buffer. If you have stuff stored in a different buffer, bind that buffer and use VertexAttribPointer to make the attribute read from it. If you have it interleaved in the same buffer, pass the appropriate offset to VertexAttribPointer and use the same stride. i.e. code:
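The offset/stride arithmetic for the two cases, sketched in Python (the GL calls are shown as comments; the attribute locations and buffer names are made up for illustration):

```python
import struct

# Interleaved layout: 3 floats of position + 3 floats of normal per vertex.
FLOAT_SIZE = struct.calcsize("f")      # 4 bytes
POSITION_OFFSET = 0
NORMAL_OFFSET = 3 * FLOAT_SIZE         # normals start right after the position
STRIDE = 6 * FLOAT_SIZE                # bytes from one vertex to the next

# Separate buffers (stride 0 = tightly packed, offset 0 in each buffer):
#   glBindBuffer(GL_ARRAY_BUFFER, positionBuffer)
#   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
#   glBindBuffer(GL_ARRAY_BUFFER, normalBuffer)
#   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0)
#
# One interleaved buffer (same stride for both attributes, different offsets):
#   glBindBuffer(GL_ARRAY_BUFFER, interleavedBuffer)
#   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, STRIDE, POSITION_OFFSET)
#   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, STRIDE, NORMAL_OFFSET)
```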
OneEightHundred fucked around with this message at 04:36 on Nov 12, 2012 |
# ¿ Nov 12, 2012 04:26 |
|
Boz0r posted:I'm reading this GPU Gems article on shadow maps, and in the section on Percentage-Closer Filtering they say that shadow maps cannot be prefiltered. Why is this? It's because the shadow test is non-linear: prefiltering would average occluder depths, and comparing the receiver against an averaged depth gives a hard yes/no answer, which is not the same as averaging the results of the individual depth comparisons the way PCF does. (In case you're wondering why shadows aren't completely hard-edged from that, it's because modern GPUs will automatically filter shadow map lookups at a given depth, but the filtering only goes as far as emulating linear filtering.) Variance shadow maps CAN be filtered and anti-aliased, but they suffer from artifacts in certain scenarios.
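The filter-order problem is easy to show with two texels at a shadow edge (a toy numeric sketch, not shader code):

```python
def shadow_test(stored_depth, receiver_depth):
    """1.0 = lit, 0.0 = shadowed."""
    return 1.0 if receiver_depth <= stored_depth else 0.0

# Two shadow-map texels at a shadow boundary: a near occluder and a far wall.
texels = [0.2, 0.9]
receiver = 0.5   # depth of the surface being shaded

# PCF: compare against each texel, then average the *results* -> soft penumbra.
pcf = sum(shadow_test(t, receiver) for t in texels) / len(texels)

# Prefiltering: average the *depths*, then compare -> hard 0-or-1 answer,
# and here it's the wrong answer for half the footprint.
prefiltered = shadow_test(sum(texels) / len(texels), receiver)
```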
|
# ¿ Jan 3, 2013 17:02 |
|
Boz0r posted:Can someone explain what homogeneous division is? I've tried reading up on it, but I'm still not sure what it actually is. Homogeneous division (the perspective divide) is dividing the X, Y, and Z of a clip-space coordinate by its W component. The reason this is done is that the division lets the hardware handle various things that can't be done with just 2D coordinates, like perspective correction and screen-edge clipping for coordinates that cross the W=0 plane. The depth is divided because doing so produces a non-linear depth distribution that concentrates more depth precision close to the camera.
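A quick numerical sketch of the divide and the depth distribution it produces (assumes a GL-style projection with hypothetical near=1 and far=100 planes):

```python
def perspective_divide(clip):
    # Homogeneous division: clip-space (x, y, z, w) -> normalized device coords.
    x, y, z, w = clip
    return (x / w, y / w, z / w)

def projected_depth(eye_depth, near=1.0, far=100.0):
    # NDC depth of a point eye_depth units in front of a GL-style camera.
    a = -(far + near) / (far - near)
    b = -2.0 * far * near / (far - near)
    z_clip = a * (-eye_depth) + b
    w_clip = eye_depth              # w ends up being the eye-space distance
    return perspective_divide((0.0, 0.0, z_clip, w_clip))[2]

near_z = projected_depth(1.0)       # -1.0 at the near plane
far_z = projected_depth(100.0)      # +1.0 at the far plane
mid_z = projected_depth(50.5)       # ~0.98: most precision sits near the camera
```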
|
# ¿ Jan 4, 2013 20:23 |
|
Heliotic posted:Also, I have absolutely no idea what glUniform1i(uniforms[UNIFORM_TEXTURE], 0); does. Doesn't GL_TEXTURE0 activate the first texture register? Why do we need to pass '0' to the uniform as well?
|
# ¿ Feb 17, 2013 09:50 |
|
Win8 Hetro Experie posted:What does context being lost in OpenGL mean and how should it be handled? How should resources like buffers, shaders, programs and textures be managed in general?
|
# ¿ Feb 18, 2013 18:24 |
|
Win8 Hetro Experie posted:I guess this means going back to square one and starting again from setting up the display mode and pixel format. Is there any harm or defensive programming advantage in calling delete buffers etc. with the old buffer names before allocating new ones? If you are using D3D, then a lost context will still return memory regions and whatnot when you attempt to map resources, but they'll be from a dummy allocation system that throws it out when you unmap. Probably the best thing to do is just program as usual, but have a mechanism for doing a drop/reload of all resources.
|
# ¿ Feb 23, 2013 18:31 |
|
gooby on rails posted:Incidentally, this is why a lot of older PC games are so unstable when alt-tabbing- they weren't written defensively enough to catch the invalidation at literally any point in the program. If your app can support manually dropping all resources and reloading them while you're playing, then you should be able to survive a context loss without problem, since the D3D API should still be giving you responses that will at least avoid a crash until then. The best practice is probably to only reference D3D resources via a resource manager that can do a centralized drop/reload. Android dropping the context in certain situations is documented, intended behavior. OneEightHundred fucked around with this message at 19:47 on Feb 23, 2013 |
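A minimal sketch of that kind of centralized drop/reload manager (the toy handle counter stands in for glGenTextures and friends; all names here are made up):

```python
class ResourceManager:
    """Owns creation callbacks so every GPU object can be dropped and rebuilt."""

    def __init__(self):
        self._creators = {}   # name -> function that (re)creates the GPU object
        self._live = {}       # name -> current GPU handle

    def register(self, name, create_fn):
        self._creators[name] = create_fn
        self._live[name] = create_fn()

    def get(self, name):
        return self._live[name]

    def handle_context_loss(self):
        # Throw every stale handle away, then recreate from the callbacks.
        self._live.clear()
        for name, create_fn in self._creators.items():
            self._live[name] = create_fn()

# Toy "GPU" that hands out fresh handles on each creation.
handles = iter(range(1000))
mgr = ResourceManager()
mgr.register("grass_texture", lambda: next(handles))
old = mgr.get("grass_texture")
mgr.handle_context_loss()
new = mgr.get("grass_texture")   # a fresh handle, not the stale one
```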
# ¿ Feb 23, 2013 19:26 |
|
Win8 Hetro Experie posted:Though it's a bit odd that some of the issues mentioned, like something hogging the GPU or the driver being updated must surely apply in desktop OpenGL as well. Apparently the ARB caved in and as of 2010, you can specify the "reset strategy" when you create a context which determines which of the two behaviors is used, one of which is to lose the context, and the "preserve everything" approach got downgraded to a recommendation.
|
# ¿ Feb 24, 2013 01:22 |
|
Is there a proper (or any) way to drop the upper mipmaps from a texture in OpenGL? That is, I want to be able to drop the more detailed mipmap levels of textures that are only visible far away and stream them back in if they become visible again. I know GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL define the valid range of mipmaps, but does changing them cause mipmaps outside of the defined range to be discarded, or are they required to be preserved in the event that the range changes back?
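Whatever the driver actually does with the range, the memory at stake is easy to quantify (a sketch; assumes an uncompressed 4-bytes-per-texel format):

```python
def mip_sizes(width, height):
    # Full mip chain dimensions, down to 1x1.
    sizes = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        sizes.append((width, height))
    return sizes

def bytes_freed_by_dropping(width, height, new_base_level, bytes_per_texel=4):
    # Memory freed *if* levels [0, new_base_level) could actually be discarded.
    dropped = mip_sizes(width, height)[:new_base_level]
    return sum(w * h * bytes_per_texel for w, h in dropped)
```

Dropping the top two levels of a 256x256 RGBA texture would free 320 KiB, which is most of the chain; that's why being able to actually discard them matters.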
|
# ¿ Mar 10, 2013 22:25 |
|
Xerophyte posted:Constructing a uniform sampling over a sphere or hemisphere isn't entirely trivial. code:
code:
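One standard construction for a uniform sphere sampler, shown as a sketch: z uniform on [-1, 1] plus a uniform azimuth gives a uniform distribution over the sphere's surface, because the sphere's area element is constant in z.

```python
import math
import random

def uniform_sphere_sample(rng):
    # z uniform on [-1, 1], azimuth uniform on [0, 2*pi) -> uniform on the sphere.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(0)
samples = [uniform_sphere_sample(rng) for _ in range(20000)]
```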
Most algorithms that use Monte Carlo sampling have some other way of spreading the error out that makes it less noticeable, e.g. the reason photon mapping has separate shoot and gather phases is that either by itself would produce a very noisy result with localized discontinuities, but doing both causes the discontinuities to be distributed to the point that they're not noticeable. OneEightHundred fucked around with this message at 20:36 on Mar 19, 2013 |
# ¿ Mar 19, 2013 20:28 |
|
Schmerm posted:Now, BindTexture calls are effectively replaced with lots and lots of glUniform calls to tell my fragment shader's sampler2D objects which texture unit to sample from (I am using GL 2.1 and GLSL 1.20, by the way). Generally, changing anything about which resources data is retrieved from, or how they're used (i.e. shaders), is pretty expensive. Spamming out draw calls from the same bound resources while not changing anything is cheap (in D3D10 and OpenGL at least, D3D9 not so much).
|
# ¿ Mar 30, 2013 18:22 |
|
Suspicious Dish posted:Well, multitexturing has to work somehow, so there are different texture units on the card itself. Rebinding samplers is still changing how the GPU is going to access the texture data though, so it's probably going to incur similar costs.
|
# ¿ Mar 31, 2013 02:55 |
|
Mata posted:Is skinned instancing (as described in this pdf) a DX10 technique? The paper doesn't describe why it can't be implemented in DX9. Writing the bone matrix data to a texture shouldn't be a problem, and I don't see why reading it would be a problem either. And that's pretty much the core of this technique...
|
# ¿ Apr 8, 2013 20:10 |
|
Hubis posted:The point of that article was to give an example of how to use instancing (a DX10 feature) to render a huge amount of skinned geometry effectively. You can issue one Draw call, with one model's worth of vertices and a texture containing all of the skinning data, rather than rendering 10,000 models one at a time or any other much more cumbersome approach. The core idea (or the one you're referring to, at least) should be portable to DX9, you're right -- and in fact it's an approach that's been used in a few DX9 engines.
|
# ¿ Apr 9, 2013 21:36 |
|
Suspicious Dish posted:This might be a bit off-topic, but is there a standard model interchange format that can be imported/exported from most 3D environments that has complex materials support, almost like custom shaders?
|
# ¿ Apr 10, 2013 05:56 |
|
Twernmilt posted:The outer edge of the outline is anti-aliased, presumably because it represents the edge of the polygon. The inner edge, however, is not anti-aliased and it looks terrible. Is there some way to tell OpenGL to anti-alias every fragment that gets rendered by a particular shader or specify which fragments to anti-alias? Do I simply have to use geometry to get something anti-aliased? If you want anti-aliasing on fragment effects, you either need to use some sort of fake AA (i.e. FXAA), or store non-linear functions in textures so that anisotropic filtering can take care of rapid shifts in value.
|
# ¿ Apr 27, 2013 08:29 |
|
Twernmilt posted:I'm sort of using it as an excuse to learn how to render to a texture. What do you mean by "high frequency fragment effects" exactly? Aliasing itself is a high-frequency phenomenon, caused by changes happening in a tighter spatial interval than the pixels on your screen. Because MSAA only computes the pixel shader once per screen pixel and only supersamples to prevent aliasing from coverage, pixel shaders that cause abrupt changes in the final value over a very small area on the surface of a polygon (like an edge-blackening shader might) will alias. Some other things like reflections/specular (especially on bumpy surfaces) are also prone to aliasing because the highlight is so small, which screen-based AA filters like FXAA can resolve.
|
# ¿ Apr 29, 2013 06:40 |
|
So, one thing's really been bugging me about heightmap terrain. Generally, the ideal size of a terrain heightmap is a power of 2 + 1 height samples per axis, because then performing LOD on it is just a matter of collapsing it to half resolution, and it can be partially collapsed. How do you align textures with that though (i.e. alpha masks for terrain features), since those generally need to be (or perform much better if they're) a power of 2 on each axis instead? OneEightHundred fucked around with this message at 00:07 on Jan 1, 2014 |
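The collapse step itself, sketched on one row of samples (this is why the 2^N + 1 count matters: endpoints survive each halving, so patch edges still line up across LODs):

```python
def collapse(row):
    # Drop every other sample from a (2^N + 1)-sample row. The endpoints
    # survive, so adjacent terrain patches at different LODs share edge samples.
    assert len(row) % 2 == 1, "needs 2^N + 1 samples"
    return row[::2]

full = list(range(9))     # 2^3 + 1 samples
half = collapse(full)     # 2^2 + 1 samples
quarter = collapse(half)  # 2^1 + 1 samples
```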
# ¿ Jan 1, 2014 00:03 |
|
Hubis posted:Second, it doesn't matter because your geometry is at (2^N + 1) but your textures can still just be (2^M). If M = N, you'll end up with one texel per "square" in the geometry, since the heightmap describes the edges of the patches. Draw it out and you'll see what I mean. What I'd like is for the terrain texels to be centered on the mesh points, but doing that requires a 2^N+1 texture.
|
# ¿ Jan 2, 2014 07:43 |
|
Grocer Goodwill posted:Though, this is less relevant on modern scalar-only hardware. Also, "mul" doesn't actually exist in GLSL, so you're free to just #define mul(a,b) ((b) * (a))
|
# ¿ Jan 5, 2014 04:22 |
|
PNG for lossless, JPEG for disk size, DDS if you need DXT compression. If you're doing one of the first two in C/C++, use FreeImage. If you really need to roll your own loader for non-compressed images for some reason, use TGA and DDS. OneEightHundred fucked around with this message at 05:14 on Jan 8, 2014 |
# ¿ Jan 8, 2014 04:59 |
|
Spite posted:EDIT: I may be misremembering NI, which I think is purely vector all the time. Is that what you meant? In terms of GPGPU they'd have to vectorize everything.
|
# ¿ Jan 8, 2014 07:37 |
|
That reminds me: How is photon mapping supposed to work with complex geometry? Specifically, how do you handle collection in a way that both avoids the noise of small polygons getting hit by single-digit numbers of photons (or zero), without bleeding through solid surfaces? Or is the shoot phase just supposed to be really inaccurate?
|
# ¿ Jan 31, 2014 08:42 |
|
lord funk posted:The entire internet needs a filter based on the version of OpenGL you actually want to learn about. I remember what a nightmare it was to teach myself OpenGL ES 2.0, and all the answers I found were 1.x.
|
# ¿ Mar 20, 2014 03:53 |
|
Colonel J posted:Has anyone ever toyed around with Ramamoorthi's article An Efficient Representation for Irradiance Environment Maps ? I have a radiance probe I want to convert to irradiance with the spherical harmonics technique; I've started writing a Python program to implement the technique in the article but I'm pretty sure I'm doing the wrong thing. If anybody had advice or even an example of an implementation I'd be forever grateful! A word of caution though: the "1% error" figure is deceptive. It's very low-error when you're integrating a shitload of mostly-similar sources over a large range of directions, like an omnidirectional environment probe or GI, because the error gets flattened out by the monotony of the environment itself. It is NOT low-error when dealing with sources that come from a narrow angle range, or in the worst case, a single direction. It can suffer from ringing artifacts in those cases, where the intensity is slightly positive at what should be the horizon, goes negative past it, and then goes slightly positive again on the opposite side of the sphere. In other words, it's good for an indirect lighting probe; it's not well-suited to integrating primary light sources.
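The ringing is visible with just the first two SH bands; here's a numerical sketch (an illustration, not the paper's full 9-coefficient irradiance version):

```python
Y00 = 0.282095   # l=0 spherical harmonic basis constant
Y1 = 0.488603    # l=1 basis constant, applied to (y, z, x)

def project_direction(d):
    # 2-band SH coefficients of a unit-intensity source from direction d.
    x, y, z = d
    return [Y00, Y1 * y, Y1 * z, Y1 * x]

def evaluate(coeffs, d):
    # Reconstruct the SH approximation in direction d.
    x, y, z = d
    basis = [Y00, Y1 * y, Y1 * z, Y1 * x]
    return sum(c * b for c, b in zip(coeffs, basis))

light = project_direction((0.0, 0.0, 1.0))     # single directional source
front = evaluate(light, (0.0, 0.0, 1.0))       # bright toward the light
back = evaluate(light, (0.0, 0.0, -1.0))       # negative: the ringing lobe
```

With only a directional source there's nothing to flatten the error out, so the reconstruction goes negative on the far side of the sphere, which is exactly the ringing described above.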
|
# ¿ Jul 21, 2014 03:45 |
|
.ma files are ASCII and have a spec (http://download.autodesk.com/us/may...umber=d0e677725). I think it has the same flavor of problems that FBX/COLLADA can have though: Basically everything you want is buried under multiple layers of functionality, it takes a good amount of work to dig the data you want out, and you have to convert anything that's in a format that you can't handle (which can potentially be very difficult). I don't know if .ma is any worse in that respect than FBX/COLLADA, but at least with FBX/COLLADA there are third-party converters already.
|
# ¿ Nov 13, 2014 04:29 |
|
D3D11's documentation re: usage modes is confusing me, especially about whether it's possible to write to a default-usage buffer from the CPU (the usage docs seem to suggest it isn't, but the UpdateSubresource docs suggest that it is). Is there an efficient way to set up something so that I can fill a structured buffer with a bunch of data with the CPU, then have a compute shader do some work on it that transforms it in-place, then use it as input to a later draw call? Or do I need to create a dynamic buffer for the CPU-written stuff and a default-usage buffer for the output? (I'm trying to have a compute shader do IDCT and doing it in-place will use half as much memory if it's possible, basically.) OneEightHundred fucked around with this message at 04:22 on Apr 18, 2015 |
# ¿ Apr 18, 2015 04:18 |
|
Is it legal to alias the outputs of a compute shader as a different type in another shader? Like, if I have a compute shader output to a structured buffer that it thinks contains this: code:
code:
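Whatever the API's aliasing rules turn out to be, the byte-level picture is just two views of the same memory. A sketch with hypothetical layouts: a writer that thinks each element is four floats, and a reader that treats the same bytes as two float2 pairs.

```python
import struct

# Hypothetical: a compute shader writes four floats per element...
as_written = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)

# ...and a later shader reads the same 16 bytes as two float2 pairs.
pair_a = struct.unpack_from("<2f", as_written, 0)
pair_b = struct.unpack_from("<2f", as_written, 8)
```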
|
# ¿ Apr 27, 2015 07:46 |
|
How does photon mapping typically deal with off-manifold areas in the final gather phase? Like, if you're sampling a point that's at the edge of a surface, half of the sample area is going to be off the surface where photons can't actually hit, which would make the result artificially dark. Do you have to compute it as the area as that's actually hittable (i.e. by clipping the sample area against the manifold), or is there some other strategy?
|
# ¿ Jun 28, 2015 00:07 |
|
How do you debug a compute shader in D3D if you're not rendering anything? AFAICT all of VS's graphics debugging stuff depends on frame capture, and it doesn't capture anything if a frame is never rendered.
|
# ¿ Mar 29, 2018 05:34 |
|
I'm working on porting a very old game to Windows. The game runs at a fixed 60Hz and changing the frame rate is not an option. Is there a way to determine the monitor refresh rate in windowed mode so I can skip or duplicate frames as necessary when the refresh rate isn't 60Hz? (I might use SyncInterval 0 instead, but I'm thinking that a predictable skip/duplication rate is probably more consistent, and regardless, I still need a way of differentiating between 60Hz and anything else.)
|
# ¿ Oct 30, 2019 03:31 |
|
Absurd Alhazred posted:Just use a timer and have the graphics loop sleep the rest of the frame, or if you want to push frames out all the time, have your physics loop do that instead. The problem is I'm not sure how to tell if the refresh rate is actually 60Hz when not in exclusive fullscreen mode.
|
# ¿ Oct 30, 2019 06:30 |
|
Absurd Alhazred posted:If you're porting an old game to a modern computer I think you're way likelier to find yourself near the start of a frame with nothing to do than near the end of a frame. What I would do to avoid jitters is make sure I'm measuring towards the next frame, not 1/60 seconds from the start of frame. Have a frame counter you advance each frame, then multiply by 1.0/60.0 to get the current target end of frame. That way you won't be compounding errors on consecutive frames. I think I'll have to just try inspecting the timestamps or something.
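The multiply-from-frame-zero scheme from the quote, sketched out (the 60 Hz constant and names are illustrative):

```python
def frame_deadline(frame_index, start_time):
    # Target time for frame N at a fixed 60 Hz, computed from the start time
    # rather than the previous deadline so rounding errors don't compound.
    return start_time + frame_index * (1.0 / 60.0)

# The alternative -- accumulating 1/60 every frame -- lets float error pile up
# one addition at a time; the multiply form computes each deadline directly.
accumulated = 0.0
for _ in range(600):
    accumulated += 1.0 / 60.0
direct = frame_deadline(600, 0.0)   # 600 frames at 60 Hz = 10 seconds
```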
|
# ¿ Oct 30, 2019 17:07 |
|
I've found like 5 pages talking about how great flip model borderless windowed is and can't find anything answering the actual thing I want to know about it: How do you actually put a game in borderless window fullscreen mode? It looks like SetFullscreenState is for exclusive fullscreen, but the documentation isn't consistent in the terminology it uses for fullscreen modes. Also, is there any info on how DXGI handles Alt+Enter under the hood? I've wound up in some weird situation where hitting Alt+Enter when the game is in windowed mode causes it to maximize the window to take up the whole monitor, then slowly expand it horizontally until it takes up all of the second monitor, and then only render to the first monitor. Something is obviously really screwy, and I'd rather just intercept Alt+Enter and handle it myself unless there's some good reason not to, but I'd like to know what it's trying to do so I can maybe just get it to cooperate instead.
|
# ¿ Apr 6, 2020 04:23 |
|
|
I'm porting something from D3D11 to OpenGL ES 2. D3D11 pretty much eliminated the standard attribute bindings, and the D3D version just produces screen coordinates from a generic attribute. I know GLES2 has generic attributes, but will the draw succeed if nothing is ever specified via glVertexPointer?
|
# ¿ Sep 20, 2020 18:54 |