OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mata posted:

Has anyone written a hardware skinning instancing shader?
Everything in my game uses hardware instancing so it would be quite elegant if I could get animations to work this way...
I'm in over my head a little here, but I'm wondering if it's possible to stream the animation data to the shader the same way I do with the instancing data?
I read up a little on Google about skinning shaders, and a common way to do it seems to be to compile the animation data to a texture. Are there advantages and disadvantages to doing it that way, and would it play nice with hardware instancing?
The only other way to instance it would be to use the vertex streams, which is completely out on DX9 hardware since you'll run out of streams.

Using a texture should work fine.
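
For what it's worth, here's a minimal sketch of the upload side, packing per-instance bone palettes into a float texture the vertex shader can fetch from. The layout and names here are my own assumptions, not from any particular paper, and it assumes a GL loader that exposes float textures (GL 3.0+ or ARB_texture_float):
code:
#include <vector>

struct Mat4 { float m[16]; };  // column-major, hypothetical math type

// One row per instance; each 4x4 matrix occupies 4 RGBA32F texels, so the
// vertex shader can rebuild a bone matrix from 4 fetches at
// (boneIndex * 4 + column, instanceIndex).
void UploadBonePalette(GLuint tex, const std::vector<Mat4> &boneMatrices,
                       int bonesPerInstance, int numInstances)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,
                 bonesPerInstance * 4, numInstances, 0,
                 GL_RGBA, GL_FLOAT, boneMatrices.data());

    // Exact texel fetches, no filtering or mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
The shader side then rebuilds each bone matrix from four texel fetches, using the instance index to pick the row.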

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

UraniumAnchor posted:

This isn't strictly 3D related, but it has to do with textures so it might belong here.

How should the alpha channel be weighted when computing the PSNR of a compressed texture? Since the amount of perceptual error depends not only on the original alpha value's absolute intensity but also on what the texture is being placed on top of, it seems like it's not a constant or easy thing, and I can't find anything online about it.
If you're using premultiplied alpha, which you generally should, then the alpha and RGB channels will be compressed independently. In that case, you can just use your normal PSNR calculation (i.e. square error, or whatever), because the alpha channel linearly scales whatever's behind the texture and the RGB channels are linear additions on top of that.

If you're trying to compute a total PSNR value for some reason (it might be helpful to know what this is for, since that sounds like a bad idea), you'd have to weight the alpha error, and what value you choose for that weight is totally arbitrary.

Other things to keep in mind are that perceptual error is greater between differences in dark intensities than bright intensities (distortion computations are generally done in gamma space), and that high-frequency hue/saturation distortion is significantly less perceptible than high-frequency intensity distortion.
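
For concreteness, here's the shape of the per-channel calculation; a minimal sketch for 8-bit channels (the stride parameter is just so you can point it at one channel of interleaved RGBA):
code:
#include <cmath>
#include <cstdint>
#include <cstddef>

// PSNR of a single channel, computed independently of the others
// (e.g. stride 4 to walk one channel of interleaved RGBA data).
double ChannelPSNR(const uint8_t *orig, const uint8_t *compressed,
                   size_t pixelCount, size_t stride)
{
    double sumSq = 0.0;
    for (size_t i = 0; i < pixelCount; i++)
    {
        double d = double(orig[i * stride]) - double(compressed[i * stride]);
        sumSq += d * d;
    }
    double mse = sumSq / double(pixelCount);
    return 10.0 * log10((255.0 * 255.0) / mse);  // in dB; watch for mse == 0
}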

OneEightHundred fucked around with this message at 19:37 on Oct 19, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Boz0r posted:

EDIT: Now that I think about it, do I even need to change anything other than giving it the other array?
BufferData and company take raw memory, so they don't give a poo poo whether you're giving them floats in a packed structure or in a flat array.

quote:

DOUBLE EDIT: What if I have the normals in a separate array, how do I add them?
See above for adding them interleaved to the same buffer.

The key thing to realize is that VertexAttribPointer, DrawElements, etc. operate on the currently bound buffer. If you have stuff stored in a different buffer, bind that buffer and use VertexAttribPointer to make the attribute read from it. If you have it interleaved in the same buffer, pass the appropriate offset to VertexAttribPointer and use the same stride.

i.e.
code:
// Interleaved layout: position, normal, and texcoord packed per-vertex
struct MyPackedVertex
{
    Vec3 position;
    Vec3 normal;
    Vec2 texCoord;
};

// Yields a member's byte offset as a pointer, which is what
// glVertexAttribPointer expects for buffer-relative offsets
#define OFFSET_PTR(t, f) (&reinterpret_cast<t *>(NULL)->f)

// Same buffer, same stride, different offset per attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(MyPackedVertex), OFFSET_PTR(MyPackedVertex, texCoord));
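And a sketch of the separate-buffer case mentioned above, with hypothetical buffer names. The point is that glVertexAttribPointer captures whichever buffer is bound at the time of the call:
code:
// Positions in one VBO, normals in another; each attribute records the
// buffer that was bound when glVertexAttribPointer was called
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vec3), 0);

glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, sizeof(Vec3), 0);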

OneEightHundred fucked around with this message at 04:36 on Nov 12, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Boz0r posted:

I'm reading this GPU Gems article on shadow maps, and in the section on Percentage-Closer Filtering they say that shadow maps cannot be prefiltered. Why is this?

http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html
Because shadow maps don't store light intensity, they store depth. A stored depth is either in front of the distance you're checking for shadowing or it isn't, so averaging a depth value that is shadowed with one that is not, for instance, will not give you a depth that is half shadowed; it just gives you a new distance that is, again, either shadowed or not.

(In case you're wondering why shadows aren't completely hard-edged despite that: modern GPUs will automatically filter shadow map lookups at a given depth, which is hardware percentage-closer filtering, but the filtering only goes as far as emulating linear filtering.)
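
To make the "compare first, then average" order concrete, here's a rough CPU-side sketch of the PCF idea (a 3x3 tap pattern, bounds checking omitted):
code:
// Average the comparison RESULTS, not the depths: each tap answers
// "is the receiver in front of this stored depth?", and the average of
// those 0/1 answers is a fractional shadow term.
float PcfShadow(const float *shadowMap, int width,
                int x, int y, float receiverDepth)
{
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            float storedDepth = shadowMap[(y + dy) * width + (x + dx)];
            lit += (receiverDepth <= storedDepth) ? 1.0f : 0.0f;
        }
    return lit / 9.0f;
}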

Variance shadow maps CAN be filtered and anti-aliased, but they suffer from artifacts in certain scenarios.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Boz0r posted:

Can someone explain what homogeneous division is? I've tried reading up on it, but I'm still not sure what it actually is.
When you output vertex coordinates from the vertex shader, four values come out: X, Y, Z, and W. The actual location where the vertex winds up on your screen will be X/W, Y/W, and the depth will be Z/W.

This is done because the division lets the hardware handle various things that can't be done with just 2D coordinates, like perspective correction and screen-edge clipping for coordinates that cross the W=0 plane.

The depth is divided because doing that produces a non-linear depth distribution that concentrates more depth precision close to the camera.
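
As a sketch, the divide itself is nothing more than this (the vector types are hypothetical stand-ins):
code:
struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Clip space -> normalized device coordinates. x/w and y/w land in
// [-1, 1] across the screen; z/w is the non-linear depth value.
Vec3 HomogeneousDivide(const Vec4 &clip)
{
    return Vec3{ clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}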

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Heliotic posted:

Also, I have absolutely no idea what glUniform1i(uniforms[UNIFORM_TEXTURE], 0); does. Doesn't GL_TEXTURE0 activate the first texture register? Why do we need to pass '0' to the uniform as well?
The uniform gets set to a texture unit so you can have multiple samplers bound to the same texture unit if you want. It's kind of a useless feature though since texture units don't really have any relevant state other than a bound texture, but they probably split it up that way for the sake of encapsulation.
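
To make the quoted question concrete, here's the three-way relationship as a sketch (myTexture is hypothetical; uniforms[UNIFORM_TEXTURE] is their name). The value you pass to glUniform1i is a texture unit index, not a texture:
code:
glActiveTexture(GL_TEXTURE0 + 2);           // make texture unit 2 active
glBindTexture(GL_TEXTURE_2D, myTexture);    // bind the texture to unit 2
glUniform1i(uniforms[UNIFORM_TEXTURE], 2);  // point the sampler at unit 2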

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Win8 Hetro Experie posted:

What does context being lost in OpenGL mean and how should it be handled? How should resources like buffers, shaders, programs and textures be managed in general?
Windows will evict your D3D context if the application window is minimized or resized, destroying all of its resources and forcing you to reupload them and reset the device if you want to do anything. OpenGL specifically requires that data can't be spontaneously lost, so the Windows drivers will preserve it. It sounds like mobile devices may also be able to lose the context if the device goes to sleep, but I don't know much about that.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Win8 Hetro Experie posted:

I guess this means going back to square one and starting again from setting up the display mode and pixel format. Is there any harm or defensive programming advantage in calling delete buffers etc. with the old buffer names before allocating new ones?
Again, keep in mind that it only affects D3D and you can effectively ignore it with OpenGL.

If you are using D3D, then a lost context will still return memory regions and whatnot when you attempt to map resources, but they'll come from a dummy allocation system that throws them out when you unmap. Probably the best thing to do is just program as usual, but have a mechanism for doing a drop/reload of all resources.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

gooby on rails posted:

Incidentally, this is why a lot of older PC games are so unstable when alt-tabbing: they weren't written defensively enough to catch the invalidation at literally any point in the program.
It's not so much that they wouldn't catch it "at any point" as that they were never well-prepared for invalidation at all. Most old games load all of their data at level start and never again, so asking them to reload data after the level's been started is a huge kludge that doesn't get well-tested. A lot of them would reference textures as D3D texture objects, in which case good luck combing your entire code base to strip those all out and replace them with handles to a drop-friendly resource manager, and games are STILL having trouble with it.

If your app can support manually dropping all resources and reloading them while you're playing, then you should be able to survive a context loss without problem, since the D3D API should still be giving you responses that will at least avoid a crash until then. The best practice is probably to only reference D3D resources via a resource manager that can do a centralized drop/reload.
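
A minimal sketch of what I mean by that, with made-up names throughout (CreateApiTexture/DestroyApiTexture stand in for the actual D3D calls):
code:
#include <string>
#include <vector>

// Hypothetical stand-ins for the real D3D create/release work
void *CreateApiTexture(const std::string &path);
void DestroyApiTexture(void *texture);

// The game only ever holds integer handles; only the manager touches
// API objects, so a device loss can drop and rebuild everything.
class TextureManager
{
public:
    int Load(const std::string &path)
    {
        m_paths.push_back(path);
        m_apiObjects.push_back(CreateApiTexture(path));
        return int(m_paths.size()) - 1;  // handle stays valid across reloads
    }

    void DropAll()
    {
        for (void *&obj : m_apiObjects)
        {
            DestroyApiTexture(obj);
            obj = nullptr;
        }
    }

    void ReloadAll()
    {
        for (size_t i = 0; i < m_paths.size(); i++)
            m_apiObjects[i] = CreateApiTexture(m_paths[i]);
    }

private:
    std::vector<std::string> m_paths;  // enough info to recreate each resource
    std::vector<void *> m_apiObjects;  // null while dropped
};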

Android dropping the context in certain situations is documented, intended behavior.

OneEightHundred fucked around with this message at 19:47 on Feb 23, 2013

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Win8 Hetro Experie posted:

Though it's a bit odd, since some of the issues mentioned, like something hogging the GPU or the driver being updated, must surely apply in desktop OpenGL as well.
I looked into this a bit further and it looks like things have changed. For a long time, NVIDIA and ATI were lobbying the ARB for an extension to allow for evictable contexts because it was a waste of RAM on Windows, and the ARB told them to get hosed because Windows was the only desktop or workstation OS that would do that. The result was both of them incessantly using non-compliant mapping behavior, because storing copies of transient draw resources was incredibly stupid.

Apparently the ARB caved, and as of 2010 you can specify a "reset strategy" when you create a context, which determines which of the two behaviors is used (one of which is to lose the context), and the "preserve everything" approach got downgraded to a recommendation.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Is there a proper (or any) way to drop the upper mipmaps from a texture in OpenGL? That is, I want to be able to drop the more detailed mipmap levels of textures that are only visible far away and stream them back in if they become visible again.

I know GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL define the valid range of mipmaps, but does changing them cause mipmaps outside of the defined range to be discarded, or are they required to be preserved in the event that the range changes back?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Xerophyte posted:

Constructing a uniform sampling over a sphere or hemisphere isn't entirely trivial.
Uniform spherical (for hemispherical, just flip it if the dot product of the direction and the hemisphere's axis is negative):
code:
inline Math::FVec3 Randomizer::RandomDirectionHigh(Float32 distance)
{
    // Uniform azimuth angle around the Z axis
    Float64 d = RandomDouble() * 2.0 * Math::Pi_k;
    Float64 s = sin(d) * Float64(distance);
    Float64 c = cos(d) * Float64(distance);

    // Uniform Z in [-1, 1]; this is what makes the sphere sampling uniform
    Float64 u = RandomDouble() * 2.0 - 1.0;
    Float64 n = sqrt(1.0 - u*u);

    return Math::FVec3(Float32(n * c), Float32(n * s), Float32(Float64(distance) * u));
}
Preweighted with cosine distribution (can be simplified into a generate/rotate but whatever):
code:
inline Math::FVec3 Randomizer::RandomDirectionLambert(const Math::FVec3 &normal)
{
    Math::FVec3 side(normal[2], -normal[0], normal[1]);
    side = (side - normal * side.DotProduct(normal)).Normalize2();
    Math::FVec3 side2 = side.Cross(normal);

    Float32 rf = RandomFloat();
    Float32 u = sqrtf(rf);          // Square root = rate of photons at N = proportional to N
    Float32 n = sqrtf(1.0f - rf);

    Math::Angle<Float32> theta(2.0f * Float32(Math::Pi_k) * RandomFloatNot1());

    return side * (n * theta.Cos()) +
        side2 * (n * theta.Sin()) +
        normal * u;
}
One other critical thing to keep in mind is what you're sampling and what kind of artifacts you'll get from it. If you use random directions for every sampling point, you'll get noise artifacts; if you use the same directions for every sampling point, you'll get banding artifacts. Both can substantially increase the number of iterations it takes to converge on a result that looks OK. Noise artifacts are less visible with high-frequency results, banding artifacts are less visible with low-frequency results.

Most algorithms that use Monte Carlo sampling have some other way to distribute the error out that makes it less noticeable, i.e. the reason photon mapping has separate shoot and gather phases is that either by itself would produce a very noisy result with localized discontinuities, but doing both causes discontinuities to be distributed to the point that they're not noticeable.

OneEightHundred fucked around with this message at 20:36 on Mar 19, 2013

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Schmerm posted:

Now, BindTexture calls are effectively replaced with lots and lots of glUniform calls to tell my fragment shader's sampler2D objects which texture unit to sample from (I am using GL 2.1 and GLSL 1.20, by the way).
Seconding calls to profile it, but I highly doubt that it'll make a difference, because the driver probably doesn't care much how you told it to fetch from a different texture; it cares that you changed which texture you're using.

Generally, changing anything about which resources data is retrieved from, or how the data is used (i.e. shaders), is pretty expensive. Spamming out draw calls from the same bound resources without changing anything is cheap (in D3D10 and OpenGL at least; D3D9, not so much).

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suspicious Dish posted:

Well, multitexturing has to work somehow, so there are different texture units on the card itself.
The separation of samplers and texture units isn't because it's cheaper to switch texture units than to switch textures, it's because the texture units can contain state related to accessing the texture that isn't part of the texture itself (i.e. via the glTexEnv settings). That matters more in D3D (which has clamp/mipmap stuff in the samplers) than OpenGL, but it's the same reasoning.

Rebinding samplers is still changing how the GPU is going to access the texture data though, so it's probably going to incur similar costs.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Mata posted:

Is skinned instancing (as described in this pdf) a DX10 technique? The paper doesn't describe why it can't be implemented in DX9. Writing the bone matrix data to a texture shouldn't be a problem, and I don't see why reading it would be a problem either. And that's pretty much the core of this technique...
NVIDIA is only interested in talking about techniques that require (and in turn, sell) their latest generation of hardware, so they're never going to offer insight on implementing techniques on older hardware. The main thing they do in this case is stuff instance data into a constant buffer, which is probably only viable on DX10 because constant manipulation is much slower on DX9, but you could always do this (and they acknowledge as such) by stuffing the data into vertex streams instead.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

The point of that article was to give an example of how to use instancing (a DX10 feature) to render a huge amount of skinned geometry effectively. You can issue one Draw call, with one model's worth of vertices and a texture containing all of the skinning data, rather than rendering 10,000 models one at a time or taking any other much more cumbersome approach. The core idea (or the one you're referring to at least) should be portable to DX9, you're right -- and in fact it's an approach that's been used in a few DX9 engines.
Instancing is a DX9 feature. The paper specifically acknowledges that this was already possible by using vertex streams to store instance data; the point of it (i.e. what it was purporting to offer over known techniques) was that you could get more performance by using a constant buffer instead.
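
For reference, the vertex-stream route on DX9 looks roughly like this (a sketch; MeshVertex/InstanceData and the buffer variables are made-up):
code:
// Stream 0: the shared mesh, drawn numInstances times.
// Stream 1: per-instance data, advanced once per instance.
device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
device->SetStreamSource(0, meshVB, 0, sizeof(MeshVertex));

device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));

device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);

// Reset the stream frequencies when you're done
device->SetStreamSourceFreq(0, 1);
device->SetStreamSourceFreq(1, 1);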

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suspicious Dish posted:

This might be a bit off-topic, but is there a standard model interchange format that can be imported/exported from most 3D environments that has complex materials support, almost like custom shaders?
I think COLLADA and FBX can both do that, but I'm not sure what they're capable of. The odds of being able to port shaders directly to or from those formats to a game is pretty low because of how application-specific shader implementations are.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Twernmilt posted:

The outer edge of the outline is anti-aliased, presumably because it represents the edge of the polygon. The inner edge, however, is not anti-aliased and it looks terrible. Is there some way to tell OpenGL to anti-alias every fragment that gets rendered by a particular shader or specify which fragments to anti-alias? Do I simply have to use geometry to get something anti-aliased?
Keep in mind how MSAA works: it only computes the color ONCE per screen pixel, but it determines coverage and occlusion for all subpixels. This means that while you'll get anti-aliasing on polygon boundaries, you will NOT get it on high-frequency effects produced by your pixel shaders within the same polygon.

If you want anti-aliasing on fragment effects, you either need to use some sort of fake AA (i.e. FXAA), or store non-linear functions in textures so that anisotropic filtering can take care of rapid shifts in value.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Twernmilt posted:

I'm sort of using it as an excuse to learn how to render to a texture. What do you mean by "high frequency fragment effects" exactly?
The faster something changes to a new value, compared to its rate of change in the surrounding area, the higher its frequency is, so "sharp" or "noisy" phenomena are high-frequency while "blurry" or "flat" phenomena are low-frequency.

Aliasing itself is a high-frequency phenomenon, caused by changes happening in a tighter spatial interval than the pixels on your screen. Because MSAA only computes pixel values once for each screen pixel and only supersamples coverage, a pixel shader that causes abrupt changes in the final value over a very small area on the surface of a polygon (like an edge-blackening shader might) will give you aliasing.

Some other things like reflections/specular (especially on bumpy surfaces) are also prone to aliasing because the highlight is so small, which screen-space AA filters like FXAA can help resolve.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
So, one thing's really been bugging me about heightmap terrain. Generally, the ideal size of a terrain heightmap is a power of 2 plus 1 (i.e. 2^N+1) height samples per axis, because then performing LOD on it is just a matter of collapsing it to half resolution, and it can be partially collapsed.

How do you align textures with that though (i.e. alpha masks for terrain features), since those generally need to be (or perform much better if they're) a power of 2 on each axis instead?

OneEightHundred fucked around with this message at 00:07 on Jan 1, 2014

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Hubis posted:

Second, it doesn't matter because your geometry is at (2^N + 1) but your textures can still just be (2^M). If M = N, you'll end up with one texel per "square" in the geometry, since the heightmap describes the edges of the patches. Draw it out and you'll see what I mean.
That's the problem though: you get one texel per quad, but the texels are centered on the terrain quads, which causes problems at the edges in particular, because the area between the edge texel centers and the edge of the terrain will either be clamped or will mirror the opposite side of the terrain.

What I'd like is for the terrain texels to be centered on the mesh points, but doing that requires a 2^N+1 texture.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Grocer Goodwill posted:

Though, this is less relevant on modern scalar-only hardware.
IIRC, AMD does use vector ops, but it breaks everything into scalar operations and re-vectorizes them during code generation.

Also "mul" doesn't actually exist in glsl so you're free to just #define mul(a,b) ((b) * (a))

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
PNG for lossless, JPEG for disk size, DDS if you need DXT compression.

If you're doing one of the first two in C/C++, use FreeImage. If you really need to roll your own loader for non-compressed images for some reason, use TGA and DDS.
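
e.g. a minimal FreeImage load-to-32-bit sketch (error handling kept short; the result is BGRA on little-endian platforms):
code:
#include <FreeImage.h>

// Loads any FreeImage-supported format and converts to 32 bits per pixel.
// Caller frees the result with FreeImage_Unload.
FIBITMAP *LoadImage32(const char *path)
{
    FREE_IMAGE_FORMAT fif = FreeImage_GetFileType(path, 0);
    if (fif == FIF_UNKNOWN)
        fif = FreeImage_GetFIFFromFilename(path);

    FIBITMAP *dib = FreeImage_Load(fif, path, 0);
    if (!dib)
        return nullptr;

    FIBITMAP *converted = FreeImage_ConvertTo32Bits(dib);
    FreeImage_Unload(dib);
    return converted;
}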

OneEightHundred fucked around with this message at 05:14 on Jan 8, 2014

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Spite posted:

EDIT: I may be misremembering NI, which I think is purely vector all the time. Is that what you meant? In terms of GPGPU they'd have to vectorize everything.
Looks like I'm wrong; I tried some stuff in ShaderAnalyzer and it's only dumping scalar ops in the final assembly. It used to vectorize the instructions despite the IL being scalar, but apparently it doesn't do that any more.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
That reminds me:

How is photon mapping supposed to work with complex geometry? Specifically, how do you handle collection in a way that avoids the noise of small polygons getting hit by single-digit numbers of photons (or zero) without bleeding through solid surfaces? Or is the shoot phase just supposed to be really inaccurate?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

lord funk posted:

The entire internet needs a filter based on the version of OpenGL you actually want to learn about. I remember what a nightmare it was to teach myself OpenGL ES 2.0, and all the answers I found were 1.x.
Honestly, they should have been purging the API of obsolete features ages ago; instead it's March 2014 and people are still recommending immediate mode.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Colonel J posted:

Has anyone ever toyed around with Ramamoorthi's article An Efficient Representation for Irradiance Environment Maps? I have a radiance probe I want to convert to irradiance with the spherical harmonics technique; I've started writing a Python program to implement the technique in the article, but I'm pretty sure I'm doing something wrong. If anybody has advice or even an example implementation, I'd be forever grateful!
Halo 3 did basically the same thing. I condensed their implementation into spoilers.

A word of caution though: the "1% error" thing is deceptive. It's very low-error when you're integrating a shitload of mostly-similar contributions from a large number of directions, like an omnidirectional environment probe or GI, because the error gets flattened out by the monotony of the environment itself. It is NOT low-error when dealing with sources that come from a narrow angle range, or in the worst case, a single direction. It can suffer from ringing artifacts in those cases, due to the reconstructed intensity not hitting zero at what should be the horizon, then going negative, and then going slightly positive again on the opposite side of the object.

In other words, it's good for an indirect lighting probe; it's not very well-suited to integrating primary light sources.
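
If it helps, the evaluation side of the paper boils down to this; my transcription of equation 13, one color channel, with L[0..8] ordered L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22:
code:
// Irradiance in direction (x, y, z) from 9 SH radiance coefficients,
// per Ramamoorthi & Hanrahan; the constants come from the paper.
float ShIrradiance(const float L[9], float x, float y, float z)
{
    const float c1 = 0.429043f, c2 = 0.511664f;
    const float c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;

    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}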

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
.ma files are ASCII and have a spec (http://download.autodesk.com/us/may...umber=d0e677725).

I think it has the same flavor of problems that FBX/COLLADA can have though: basically everything you want is buried under multiple layers of functionality, it takes a good amount of work to dig the data out, and you have to convert anything that's in a format you can't handle (which can potentially be very difficult).

I don't know if .ma is any worse in that respect than FBX/COLLADA, but at least with FBX/COLLADA there are third-party converters already.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
D3D11's documentation re: usage modes is confusing me, especially about whether it's possible to write to a default-usage buffer from the CPU (the usage docs seem to suggest it isn't, but the UpdateSubresource docs suggest that it is).

Is there an efficient way to set up something so that I can fill a structured buffer with a bunch of data with the CPU, then have a compute shader do some work on it that transforms it in-place, then use it as input to a later draw call? Or do I need to create a dynamic buffer for the CPU-written stuff and a default-usage buffer for the output?

(I'm trying to have a compute shader do IDCT and doing it in-place will use half as much memory if it's possible, basically.)

OneEightHundred fucked around with this message at 04:22 on Apr 18, 2015

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Is it legal to alias the outputs of a compute shader as a different type in another shader?

Like, if I have a compute shader output to a structured buffer that it thinks contains this:
code:
struct BlockV
{
    int4 stuff[8];
};
... and a pixel shader reads the same structured buffer thinking that it contains this:
code:
struct Block
{
    int stuff[32];
};
... is that supported behavior?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
How does photon mapping typically deal with off-manifold areas in the final gather phase? Like, if you're sampling a point at the edge of a surface, half of the sample area is going to be off the surface where photons can't actually hit, which would make the result artificially dark. Do you have to compute the area that's actually hittable (i.e. by clipping the sample area against the manifold), or is there some other strategy?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
How do you debug a compute shader in D3D if you're not rendering anything? AFAICT all of VS's graphics debugging stuff depends on frame capture, and it doesn't capture anything if a frame is never rendered.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'm working on porting a very old game to Windows. The game runs at a fixed 60Hz and changing the frame rate is not an option. Is there a way to determine the monitor refresh rate in windowed mode so I can skip or duplicate frames as necessary when the refresh rate isn't 60Hz? (I might use SyncInterval 0 instead, but I'm thinking that predictable skip/duplication rates are probably more consistent, and regardless, I still need a way of differentiating between 60Hz and anything else.)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Absurd Alhazred posted:

Just use a timer and have the graphics loop sleep the rest of the frame, or if you want to push frames out all the time, have your physics loop do that instead.
That's basically what would happen in the SyncInterval 0 option. The only reason I'm even worrying about this is because of the potential pathological case at 60Hz where the present time is close to a frame boundary, in which case timing jitter could cause it to randomly overwrite the pending frame before it's been displayed, causing frame drops and duplication. I dunno if that's really much of an issue, but using SyncInterval 1 at 60Hz would avoid it since it would never overwrite a pending frame.

The problem is I'm not sure how to tell if the refresh rate is actually 60Hz when not in exclusive fullscreen mode.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Absurd Alhazred posted:

If you're porting an old game to a modern computer I think you're way likelier to find yourself near the start of a frame with nothing to do than near the end of a frame. What I would do to avoid jitters is make sure I'm measuring towards the next frame, not 1/60 seconds from the start of frame. Have a frame counter you advance each frame, then multiply by 1.0/60.0 to get the current target end of frame. That way you won't be compounding errors on consecutive frames.
I'm not worried about it taking too long to do a frame, I'm worried about the relative timing of the presents and the vblank intervals. If I draw and present frames at 60Hz with SyncInterval 0, that might work fine, unless the presents are happening close to the time when the buffer is sent off to the DWM/monitor, in which case any timing jitter would cause the presents to randomly happen before or after that boundary, causing overwrites of the previous frame (drops) or frames where nothing was presented (duplicates). SyncInterval 1 would block until a buffer is freed up, putting it at the start of the interval, but is synchronized with the monitor refresh rate (and maybe other things?).

I think I'll have to just try inspecting the timestamps or something.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I've found like 5 pages talking about how great flip model borderless windowed is and can't find anything answering the actual thing I want to know about it:

How do you actually put a game in borderless windowed fullscreen mode? It looks like SetFullscreenState is for exclusive fullscreen, but the documentation isn't consistent in the terminology it uses for fullscreen modes.


Also, is there any info on how DXGI handles Alt+Enter under the hood? I've wound up in some weird situation where hitting Alt+Enter when the game is in windowed mode causes it to maximize the window to take up the whole monitor and then slowly expand it horizontally until it takes up all of the second monitor, and then only renders to the first monitor. Something is obviously really screwy, and I'd rather just intercept Alt+Enter and handle it myself unless there's some good reason not to, but I'd like to know what it's trying to do so I can maybe just get it to cooperate instead.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'm porting something from D3D11 to OpenGL ES 2. D3D11 pretty much eliminated the standard attribute bindings, and the D3D version just produces screen coordinates from a generic attribute. I know GLES2 has generic attributes, but will the draw succeed if nothing is ever specified via glVertexPointer?
