|
BernieLomax posted:As far as I remember, for clamped texturing the speed is almost the same since you avoid a mod for every texture lookup.
|
# ¿ Dec 16, 2008 08:50 |
|
You don't want to compile Mesa 3D, you do want to use it to get OpenGL-compatible headers. It comes with gl.h, get glext.h and wglext.h from here, and either link against opengl32.lib, or load opengl32.dll and get the entry points manually. As for lovely performance, the 915G chipset (a.k.a. GMA950, the most common Intel IGP right now) performs worse than a GeForce 2, so don't be surprised. They're so alarmingly bad that you really only have two sane design decisions: Make your game look like it was made in 1997, or don't support Intel IGPs. OneEightHundred fucked around with this message at 10:14 on Dec 17, 2008 |
# ¿ Dec 17, 2008 10:10 |
|
Normally you want to use framebuffer objects and multiple render targets for that (using the extensions that conveniently have the same names!), pbuffers involve expensive context switches and kind of suck.
|
# ¿ Dec 17, 2008 16:20 |
|
heeen posted:The OP specifically said he didn't want to use FBOs. What's wrong with FBOs? (i.e. why would you ever want to use pbuffers over them?)
|
# ¿ Dec 17, 2008 22:31 |
|
heeen posted:I'm having a problem with non power of two textures under OpenGL, this is what I'm getting:
|
# ¿ Jan 14, 2009 18:01 |
|
You can also take advantage of D3DX's ID3DXMesh::OptimizeInplace function, which will handle index optimization. It doesn't depend on using D3D for rendering, and you can do it either during load or in the content pipeline (the latter being preferred).
|
# ¿ Jan 19, 2009 00:57 |
|
mat3 tbn=mat3(svec, tvec, ws_normal); Try that.
|
# ¿ Feb 9, 2009 21:52 |
|
You can't really do that, because any point on the model can affect the bounds. i.e. a rotating sphere always has the same box, but a rotating diagonal line won't, even though the two may have identical bounding boxes in certain configurations. The best thing to do is either precompute the bounding box for each animation frame and expand it a bit if you're compositing animations, or use various extent points (i.e. hands, elbows, head, gut, back, knees, feet) to determine it.
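As a rough illustration of the extent-point approach, here is a minimal axis-aligned box that grows around whatever marker points you feed it, then gets padded for composited animations. All names are invented for the sketch; this is not code from the post.

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 mins{ FLT_MAX,  FLT_MAX,  FLT_MAX};
    Vec3 maxs{-FLT_MAX, -FLT_MAX, -FLT_MAX};

    // Grow the box to contain a point (e.g. a hand/head/knee extent marker).
    void addPoint(const Vec3 &p) {
        mins.x = std::min(mins.x, p.x); maxs.x = std::max(maxs.x, p.x);
        mins.y = std::min(mins.y, p.y); maxs.y = std::max(maxs.y, p.y);
        mins.z = std::min(mins.z, p.z); maxs.z = std::max(maxs.z, p.z);
    }

    // Pad the box a bit to absorb error from composited animations.
    void expand(float pad) {
        mins.x -= pad; mins.y -= pad; mins.z -= pad;
        maxs.x += pad; maxs.y += pad; maxs.z += pad;
    }
};
```

The same `addPoint` loop works for the precomputed-per-frame variant; you just feed it every vertex of the baked frame offline instead of a handful of markers at runtime.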
|
# ¿ Feb 13, 2009 01:29 |
|
StickGuy posted:At least with .NET, bitmaps are ARGB. http://msdn.microsoft.com/en-us/library/dd183376(VS.85).aspx quote:The blue mask is 0x000000FF, the green mask is 0x0000FF00, and the red mask is 0x00FF0000
|
# ¿ Mar 30, 2009 08:35 |
|
Small White Dragon posted:Cross posting from the iPhone thread, maybe you guys would know: Make sure the alpha texture is uploaded with ALPHA as the internal format (when you call TexImage2D), not LUMINANCE or RGB. Using MODULATE with a LUMINANCE/RGB texture will cause the color to be multiplied by the texture and leave the alpha channel alone (which sounds like what you're experiencing), while using MODULATE with an ALPHA texture will cause the alpha channel to be multiplied by the texture and leave the color alone. OneEightHundred fucked around with this message at 10:09 on Mar 31, 2009 |
# ¿ Mar 31, 2009 09:55 |
|
PnP Bios posted:I don't think thats right, you should only need to turn on the texture state once.
|
# ¿ Apr 2, 2009 02:03 |
|
With GLSL, is there any way to change JUST the vertex or pixel shader to a different one that doesn't involve linking a new program object?
|
# ¿ May 18, 2009 01:31 |
|
Khronos showed that OpenGL is going to be a dinosaur playing catch-up forever on the consumer market because the CAD industry loves dinosaurs. Only time will tell if the deprecation model has any teeth. OpenGL 3.1 at least caught up with uniform buffers and instancing, but it's still missing features that should have been added ages ago. i.e. why can't I get the compiled code for shaders and reload it later to avoid glacial load times and huge hitches during hot-loading? Why are all stages coupled into one program object, requiring ridiculous numbers of permutations, especially considering the previous issue? Where's the atomic object creation promised for 3.0? As for getting tutorials for it: Try to think of OpenGL 3.0 as OpenGL 2.2, except with "GL3/gl3.h" as your include file. What you really want to get into is using GLSL instead of the fixed-function poo poo, since they hardly changed anything else. OneEightHundred fucked around with this message at 17:15 on Jun 6, 2009 |
# ¿ Jun 6, 2009 16:49 |
|
The pipeline with GLSL looks like: Uniforms + samplers + per-vertex data --> vertex shader [--> geometry shader] --> depth test --> fragment shader --> alpha test (ugh) --> blend --> target. The vertex shader takes over all of the per-vertex operations (including transform and lighting), the fragment shader takes over all of the per-pixel operations. Jo posted:I'm just surprised. I always shrugged GLSL off as nothing more than a way of applying fancy textures. That's what 'shader' meant to me. To see that it does lighting and geometric transforms is a very strange and eye-opening realization.
|
# ¿ Jun 7, 2009 21:40 |
|
Vertex arrays are generally a bad idea because they prevent the driver from using the vertex cache at all. Using an index buffer means that it can cache the results of vertex shader runs so it doesn't have to run the vertex shader again for that vertex. Vertex buffer objects are the preferred way of doing things, and the equivalents of SetInputLayout and SetIndexBuffer are not skippable in those cases.
|
# ¿ Jun 14, 2009 07:02 |
|
Digital Spaghetti posted:Just as a sidenote - wow, there is really no resources out there for doing 2D scrolling games with OpenGL, for beginners like me - especially with ES.
|
# ¿ Jul 9, 2009 22:19 |
|
Looks like the scene is rendered into a reflective shadowmap, which is used to generate point sources that propagate into a volume texture of SH coefficients. This appears to be repeated: scene probes collect light from the propagation volume and scatter it back out into the light volume for multiple bounces. This isn't a staggeringly inventive approach; the real accomplishment is that they managed to make most of this poo poo happen on the GPU without bouncing back to the CPU.
|
# ¿ Aug 13, 2009 06:21 |
|
If you're even thinking about landscapes then you probably want to grab World Machine.
|
# ¿ Aug 13, 2009 17:29 |
|
slovach posted:Need to set an FVF if I'm using ID3DXSPRITE? i.e.: http://msdn.microsoft.com/en-us/library/bb174365%28VS.85%29.aspx quote:What about a cullmode? Isn't counter clockwise typically the default?
|
# ¿ Aug 31, 2009 14:14 |
|
Avenging Dentist posted:They're not that bad when you're just figuring poo poo out, since there's less to think about with FVF. Re: Sprite sorting, yes, any state change (changing buffer, shader, texture, render target, etc.) is expensive, draw calls by themselves are expensive too. The number of draw calls you make is one of the biggest limiting factors in DX9. The best thing to do to minimize both if you need sorted sprites is to use an atlas, which is one texture containing a lot of different sprites, and each sprite uses a piece of that texture. Since your sprites don't tile, this shouldn't be a problem. OneEightHundred fucked around with this message at 12:32 on Sep 1, 2009 |
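The atlas bookkeeping is just a rectangle-to-UV mapping. A minimal sketch (invented names, assuming pixel coordinates with the origin at the atlas's top-left corner):

```cpp
#include <cassert>

struct SpriteUV { float u0, v0, u1, v1; };

// Map a sprite's pixel rectangle inside an atlas to normalized texture
// coordinates, so many sprites can share one texture and one draw call.
SpriteUV atlasUV(int atlasW, int atlasH, int x, int y, int w, int h) {
    SpriteUV uv;
    uv.u0 = (float)x / atlasW;
    uv.v0 = (float)y / atlasH;
    uv.u1 = (float)(x + w) / atlasW;
    uv.v1 = (float)(y + h) / atlasH;
    return uv;
}
```

Each quad in the batched vertex buffer then gets the returned UVs instead of 0..1, and every sprite that lives in the same atlas can go into the same draw call.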
# ¿ Sep 1, 2009 12:25 |
|
A better question would be what you are trying to do. ReadPixels has been glacially slow for about as long as OpenGL has existed, mainly because it's a blocking operation. Even apps that do use it for feedback (i.e. histograms) tend to rescale the scene to a smaller buffer to cut down on the amount of data that has to be transferred.
|
# ¿ Sep 3, 2009 09:33 |
|
Option A: Make a 2D texture, colorize it based on the height, and assign the texture coordinates of the "terrain" mesh to the corresponding locations on that texture. Option B: Make a 1D texture containing the colors in order, and reference it based on height. OneEightHundred fucked around with this message at 17:30 on Sep 8, 2009 |
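For Option B, the lookup amounts to normalizing the height into [0,1] and indexing a color table. Here is a CPU-side sketch of the same math (invented names; in practice the texture unit does this when you sample the 1D texture by height):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Color { float r, g, b; };

// Emulates sampling a 1D color ramp by height, the way Option B would:
// normalize into [0,1], clamp like GL_CLAMP_TO_EDGE, nearest filtering.
Color sampleRamp(const std::vector<Color> &ramp, float h, float hMin, float hMax) {
    float t = (h - hMin) / (hMax - hMin);   // normalize height to [0,1]
    t = std::max(0.0f, std::min(1.0f, t)); // clamp to the edge texels
    size_t i = std::min(ramp.size() - 1, (size_t)(t * ramp.size()));
    return ramp[i];
}
```

With linear filtering on the real texture you'd also get smooth blends between adjacent ramp entries for free.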
# ¿ Sep 8, 2009 17:28 |
|
Yeah you could do that too. A side-effect of Option A is you can use a lower-resolution mesh than the data and it will still look OK. A side-effect of Option B is that it'll transition properly with high-frequency data, making it probably the best choice.
|
# ¿ Sep 8, 2009 17:31 |
|
edit: Actually I'm confused. Is this an indexed array of vertices, or does each consecutive set of three vertices represent one triangle? OneEightHundred fucked around with this message at 06:46 on Oct 3, 2009 |
# ¿ Oct 3, 2009 06:39 |
|
The tangent/binormal generation code looks OK, though there is a possibility it's being mishandled elsewhere. While the effect of this depends VERY heavily on the scale you're doing things at, I would strongly recommend avoiding the use of oversized vector components, and swizzle away components you don't need so you don't accidentally operate on them. i.e. something like this... quote:tcVarying = vec2(fullTexture.x, fullTexture.y); quote:tcVarying = fullTexture.xy; The big problem is when you get stuff like this: quote:vec4 tangentEye = normalize(normalMatrix * tangent); The last row/column (depending on convention) of a transform matrix is generally (0,0,0,1), so the W component would get copied. The side-effect is that the W component of (normalMatrix * tangent) is non-zero, and will consequently affect the result of the normalize. That goes double for this case, where tangent.w may be -1 or 1. The dot product may result in even more issues if lightVector.w is non-zero. Generally speaking, you should use swizzling to remove components you're not using to prevent unintended side-effects. i.e. something like: quote:vec3 tangentEye = normalize((normalMatrix * tangent).xyz); Best thing you can do for now though is make a normal map where the direction things should be facing is unmistakable, i.e. put a white circle on a black background and just run your heightmap-to-normalmap filter of choice on it, and make sure they are actually getting flipped AND getting flipped inconsistently (as opposed to being flipped consistently in which case you just negate the bitangent or something.)
|
# ¿ Oct 4, 2009 17:32 |
|
Static linking to opengl32.dll is something you should never do anyway, since the program will close if a proc lookup fails, making it impossible to support optional features. You can macro the proc lookups to make things a bit easier.
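A sketch of the macro idea (the names glProcLookup and GL_BIND are invented for the example, as is the stub table; a real build would route the lookup through wglGetProcAddress or glXGetProcAddress):

```cpp
#include <cassert>
#include <cstring>

typedef void (*GLProc)();

static void stubGenBuffers() {}  // stand-in so the sketch is self-contained

// Real code would be: return (GLProc)wglGetProcAddress(name);
GLProc glProcLookup(const char *name) {
    if (std::strcmp(name, "glGenBuffers") == 0)
        return (GLProc)stubGenBuffers;
    return nullptr;  // lookup failed: leave the pointer null, don't abort
}

// Bind a proc into a typed function pointer; callers check for null to
// detect optional features instead of crashing at startup.
#define GL_BIND(type, var, name) type var = (type)glProcLookup(name)
```

The point of the macro is that every entry point goes through the same null-tolerant path, so a missing extension just leaves its pointers null and the feature can be disabled at runtime.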
|
# ¿ Nov 10, 2009 20:21 |
|
Creating a list of unique vertices is something you do during preprocessing, not runtime.
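A preprocessing pass like that is just deduplication of the triangle soup into a unique vertex list plus an index buffer. A minimal sketch (invented names; an ordered map keeps it short, a hash map would be faster offline):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

struct Vertex {
    float x, y, z;
    bool operator<(const Vertex &o) const {   // ordering for the remap table
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        return z < o.z;
    }
};

// Offline preprocessing: collapse a raw triangle soup into unique vertices
// plus indices, so the GPU's post-transform cache can reuse shader results.
void buildIndexed(const std::vector<Vertex> &soup,
                  std::vector<Vertex> &verts, std::vector<uint32_t> &indices) {
    std::map<Vertex, uint32_t> remap;
    for (const Vertex &v : soup) {
        auto it = remap.find(v);
        if (it == remap.end()) {
            it = remap.insert({v, (uint32_t)verts.size()}).first;
            verts.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

In real data the key would include UVs and normals too, since vertices that differ in any attribute can't be merged.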
|
# ¿ Nov 13, 2009 13:21 |
|
Hubis posted:not without doing some pixel shader tricks, no.
|
# ¿ Jan 20, 2010 14:42 |
|
Make sure you enable color writes with glColorMask before clearing?
|
# ¿ Feb 10, 2010 02:19 |
|
Contero posted:Is there a straightforward way of getting rid of this problem? It seems like it should be fairly common. Number each quad's corners 0 1 on one row and 2 3 on the next. You're probably doing something like always using (0 1 3) (0 3 2) as the triangles. Don't do this. Instead, alternate which direction the diagonal goes. i.e. if (heightmap x coord + heightmap y coord) is even, use (0 1 3) (0 3 2), and if it's odd, use (0 1 2) (1 3 2) OneEightHundred fucked around with this message at 06:00 on Mar 4, 2010 |
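A sketch of the alternating-diagonal index generation (invented names; corner numbering matches the post, with 0 1 on one grid row and 2 3 on the next):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Index a heightmap grid of w x h quads, flipping the quad diagonal on a
// checkerboard: even (x + y) splits along 0-3, odd splits along 1-2.
std::vector<uint32_t> triangulate(int w, int h) {
    std::vector<uint32_t> idx;
    int stride = w + 1;                              // vertices per row
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint32_t c0 = y * stride + x, c1 = c0 + 1;
            uint32_t c2 = c0 + stride,   c3 = c2 + 1;
            if ((x + y) % 2 == 0)                    // diagonal 0-3
                idx.insert(idx.end(), {c0, c1, c3, c0, c3, c2});
            else                                     // diagonal 1-2
                idx.insert(idx.end(), {c0, c1, c2, c1, c3, c2});
        }
    return idx;
}
```

Here corner 2 is the vertex directly below corner 0 in the grid, so the winding in this sketch may need flipping to match your culling convention.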
# ¿ Mar 4, 2010 05:56 |
|
Contero posted:I could have sworn you said change the order based on just the Y coord before. It looks less regular, but it's still there. Something like this:
|
# ¿ Mar 5, 2010 05:39 |
|
Hubis posted:You'll still see the artifacts (the underlying problem of the quads being non-planar will remain) but it will reduce the visibility of it.
|
# ¿ Mar 6, 2010 04:30 |
|
roomforthetuna posted:1. I'm pretty sure the former method is well supported, but how well supported is the latter? The latter is much more common as far as I know, precisely because you can cram more bones in per draw call. quote:3. If you're rendering a complex object, is it better to have one big mesh with four weights per vertex (most vertices just being 100% on one bone, so the weighting is overkill and unused for many of the vertices), or to render the object as a number of meshes so as to reduce the weighting to two or three per vertex (so there's a separate mesh around each joint), or even to render the object as even more meshes so that blocks which are bound to only a single bone are rendered unweighted, and only the triangles which need their vertices skinned are rendered with weights? If vertex processing speeds are an issue, then consider weight pre-blending. i.e. instead of sending the bone matrices as uniforms, send each unique matrix/weight combination, and index that per-vert instead. SSE can be used to speed this up a bit.
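The pre-blending step itself is just a weighted sum of bone matrices, done once per unique (bones, weights) combination instead of once per vertex. A sketch (invented names; matrices are 3x4 row-major affine, and the bookkeeping that finds unique combinations is left out):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// 3x4 affine transform, row-major: rows are (x-basis tx / y-basis ty / ...).
struct Mat34 { float m[12]; };

// Blend bone matrices by weight into one matrix. Each vertex then indexes
// a single precomputed blend instead of carrying per-vertex weights into
// the shader; SSE can vectorize the inner loop.
Mat34 blend(const std::vector<Mat34> &bones,
            const std::vector<int> &boneIdx, const std::vector<float> &weights) {
    Mat34 out;
    std::memset(out.m, 0, sizeof(out.m));
    for (size_t i = 0; i < boneIdx.size(); ++i)
        for (int j = 0; j < 12; ++j)
            out.m[j] += bones[boneIdx[i]].m[j] * weights[i];
    return out;
}
```

Note the blended result is only an approximation for rotations (linear matrix blending, same as standard GPU skinning), which is exactly why it can replace the in-shader weighted sum one-for-one.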
|
# ¿ Mar 26, 2010 04:19 |
|
krysmopompas posted:As a side-note, if you want to get fancy, http://www.emergent.net/GameFest2008 is an interesting way of bypassing the draw call limitations. Incidentally this sort of thing isn't even new, Quake 3 did it.
|
# ¿ Mar 26, 2010 04:59 |
|
You should make your data files index weight blends because you can easily convert that to any of the several ways of processing it. I can name at least three major ways of processing skeletal deformation, and indexed weight blends are the only approach that can be trivially loaded as all three. i.e. see AX3SkeletalDeform in this: http://svn.icculus.org/*checkout*/teu/trunk/tertius/docs/tmrf.html?revision=227
|
# ¿ Mar 26, 2010 05:18 |
|
krysmopompas posted:There are a lot of reasons, the paper covers that. There's an additional presentation on the same subject from this year's gdc by some atvi guy that I haven't noticed out in the wild yet too. quote:Using opengl display lists is certainly an inventive way of bypassing d3d9's overhead.
|
# ¿ Mar 26, 2010 06:55 |
|
Alternatively, use a quaternion for orientation, since you can just renormalize it to prevent floating-point error from accumulating.
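The renormalization step is cheap enough to do every frame. A sketch (invented names):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

// Rescale the quaternion back onto the unit sphere. Doing this each frame
// keeps accumulated floating-point drift from skewing the orientation,
// which a 3x3 matrix can't fix as cheaply (that needs re-orthonormalizing).
Quat normalize(Quat q) {
    float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x /= len; q.y /= len; q.z /= len; q.w /= len;
    return q;
}
```

A production version would guard against a near-zero length before dividing.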
|
# ¿ Apr 1, 2010 11:59 |
|
Sutherland–Hodgman can be simplified in the case of convex windings because you can guarantee there will never be more than 2 intersection points per plane.
|
# ¿ Apr 9, 2010 04:03 |
|
Contero posted:Well I've just spent a bunch of time trying to get FBOs to work and the most random thing seems to be causing the error. Try a different format (i.e. not GL_RGBA8). Try glGenerateMipmapEXT(GL_TEXTURE_2D) on the texture.
|
# ¿ Apr 22, 2010 04:48 |
|
Yeah dealing with pixel format validity in D3D is just as obnoxious. Once you start running into issues caused by lovely driver GLSL compilers, THEN you'll have a real reason to ragequit OpenGL!
|
# ¿ Apr 22, 2010 18:38 |