|
HolaMundo posted:Thanks for the answers. code:
You can also have a shader variable that you set to determine which version of the shader function to call from a single technique - though I'm not sure if that's a horrible idea you shouldn't copy. I've done it something like this: code:
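The code block that originally followed this post didn't survive the archive. A minimal HLSL sketch of the idea, with all names invented (gShaderMode, ShadeBasic, ShadeFancy), might look like:

```hlsl
// gShaderMode is a hypothetical uniform the application sets before
// drawing (e.g. via the effect framework's SetValue) to pick a path.
int gShaderMode;

float4 ShadeBasic(float2 uv) { return float4(uv, 0, 1); }
float4 ShadeFancy(float2 uv) { return float4(uv.yx, 0.5, 1); }

float4 PSMain(float2 uv : TEXCOORD0) : COLOR0
{
    // One technique, one compiled shader; the variable picks the path.
    if (gShaderMode == 0)
        return ShadeBasic(uv);
    return ShadeFancy(uv);
}

technique Render
{
    pass P0 { PixelShader = compile ps_3_0 PSMain(); }
}
```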
|
# ? Nov 24, 2011 04:05 |
|
|
zzz posted:I haven't touched GPU stuff in a while, but I was under the impression that static branches based on global uniform variables will be optimized away by all modern compilers/drivers and never get executed on the GPU, so it wouldn't make a significant difference either way...? You'd hope so, but I wouldn't assume that! The vendors do perform various crazy optimizations based on the data. I've seen a certain vendor attempt to optimize 0.0 passed in as a uniform by recompiling the shader and making that a constant. Doesn't always work so well when those values are part of an array of bone transforms, heh. Basically, you don't want to branch if you can avoid it. Fragments are executed in groups, so if you have good locality for your branches (ie, all the fragments in a block take the same branch) you won't hit the nasty miss case.
|
# ? Nov 24, 2011 08:18 |
|
Cross-posting this from the Mac OS/iOS Apps thread. I'm trying to do something with cocos2d, but have been told that it will involve some OpenGL ES work to achieve what I'm looking to do. This is just a mockup I whipped up in my photo editor. Let's say I want to load in an image like the one of the left but alter it at runtime to look like the one on the right. I have no idea where to start with plotting texture points to a 2D polygon and all that jazz. Any suggestions or simple examples that would help?
|
# ? Nov 29, 2011 21:36 |
|
nolen posted:Cross-posting this from the Mac OS/iOS Apps thread. Split your quad into 2 quads. Then look into an affine transform. Also cocos2D is one of the worst pieces of software known to man (not helpful, I know...)
|
# ? Dec 1, 2011 21:14 |
|
HolaMundo posted:How should I handle having multiple shaders? The ideal solution is to use the pre-processor to #ifdef out sections of the code corresponding to different features, then pass defines to the compiler as macros and generate all the permutations you might need. However, it's a lot simpler (and practically as good) to just place the code in branches, and branch based on bool parameters from constant buffers. So long as the branches are based completely on constant buffer values you shouldn't see any problem. This solution is almost as good as using defines on newer hardware; on older hardware (GeForce 7000-era) you might see some slightly slower shader loading/compilation time, but it's almost certainly not noticeable unless you're doing lots of streaming content. zzz posted:I haven't touched GPU stuff in a while, but I was under the impression that static branches based on global uniform variables will be optimized away by all modern compilers/drivers and never get executed on the GPU, so it wouldn't make a significant difference either way...? Spite posted:You'd hope so, but I wouldn't assume that! The vendors do perform various crazy optimizations based on the data. I've seen a certain vendor attempt to optimize 0.0 passed in as a uniform by recompiling the shader and making that a constant. Doesn't always work so well when those values are part of an array of bone transforms, heh. This should work, so long as the static branch is a bool. Woz My Neg rear end posted:It's almost always preferable to a true conditional to run the extra calculations in all cases and multiply the result by 0 if you don't want it to contribute to the fragment. This is the opposite of true; do not do this. Hubis fucked around with this message at 02:31 on Dec 6, 2011 |
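A sketch of the #ifdef approach Hubis describes; the feature name and sampler names here are invented, and each permutation comes from compiling the same source with different macros (e.g. a D3DXMACRO array passed to the shader compiler):

```hlsl
// Compiled once per permutation, e.g. with
//   D3DXMACRO defs[] = { { "USE_NORMAL_MAP", "1" }, { NULL, NULL } };
sampler2D DiffuseSampler;
sampler2D NormalSampler;
float3 gLightDir;

float4 PSMain(float2 uv : TEXCOORD0) : COLOR0
{
    float4 albedo = tex2D(DiffuseSampler, uv);
#ifdef USE_NORMAL_MAP
    // Permutation with tangent-space normal mapping enabled.
    float3 n = normalize(tex2D(NormalSampler, uv).xyz * 2 - 1);
#else
    // Flat-shaded fallback permutation.
    float3 n = float3(0, 0, 1);
#endif
    return albedo * saturate(dot(n, -gLightDir));
}
```

The dead code is gone entirely from each compiled permutation, which is why this beats even a perfectly optimized static branch.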
# ? Dec 6, 2011 02:18 |
|
I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite. That is, if a pixel on the far right edge is black, you can see black bleeding into the empty space on the left at the same Y values. 1) I don't see why this is happening at all; even if I had it set to 'wrap', the coords are (0,0) and (1,1). 2) This only started happening when I added an alpha channel to the texture. Before it was just a character on a white background, now the white background has been replaced with transparency. 3) The texture is not a power of 2, it's something like 23x46 (it's just a test sprite). I am under the impression that power-of-2 textures aren't really required anymore...plus, as I said, this worked before transparency. I can post some code later tonight, but does anyone have any ideas just from seeing these symptoms? I have tried debugging the bleeding pixels with PIX and the tex2D call is returning the erroneous values.
|
# ? Dec 6, 2011 21:25 |
|
Orzo posted:I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite. That is, if a pixel on the far right edge is black, you can see black bleeding into the empty space on the left at the same Y values.
|
# ? Dec 6, 2011 23:38 |
|
Thanks, I'll look into that. Actually, this leads me to another question which I bet is related. I was doing some testing of this issue by modifying the image, saving it, and re-running my program. And I'd get really really weird results where the result image was the previous image overlapped with the new one (at this point the old one didn't even exist on disk anymore!), and I thought it was some sort of weird graphics memory caching thing. For example, the original image was a picture of Samus from Super Metroid. I replaced it with a white square. The result was samus with a white square blended together. Is it possible for things like this to happen due to the underrun?
|
# ? Dec 6, 2011 23:57 |
|
Orzo posted:Is it possible for things like this to happen due to the underrun? e: Actually that's another distinct possibility: A lot of mipmap generation filters like bicubic have sampling areas larger than just a 2x2 box from the previous level, so you can potentially get spillover from the other edge of the image if it's assuming that the texture tiles. OneEightHundred fucked around with this message at 00:39 on Dec 7, 2011 |
# ? Dec 7, 2011 00:13 |
|
I just wanted to report that the problem went away when I changed the dimensions to powers of 2. I thought that wasn't a requirement anymore...what gives?
|
# ? Dec 7, 2011 02:14 |
|
Orzo posted:I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite. If you're using DirectX9, then this will explain the problem and how to solve it.
|
# ? Dec 7, 2011 14:20 |
|
I don't know how much you guys know about path tracing and the rendering equation but I've got a question about it: In all of the simple algorithms for path tracing using lots of Monte Carlo samples that I see in lecture notes, the tracing function randomly chooses between returning the emitted value for the current surface and continuing by tracing another ray from that surface's hemisphere (for example in the slides here). Like so: code:
Also, why do the algorithms that don't make this choice between the emissive and reflective components instead count only the first emissive term? The rendering equation for a point is the emissive term plus the integral of the incoming light, which is itself the emissive term of another point plus the integral of the light arriving there, so why not count emission at every bounce of every path?
|
# ? Dec 7, 2011 21:08 |
|
I'm using SlimDX with Direct3D 10.1 and I created a 2D 32x32 texture of format A8_UNorm set up for usage as a shader resource and flagged for write access. When I map the texture to get at a stream to write into it (using WriteDiscard and mip level 0), the stream's stated Length is 2048 bytes long which doesn't make any sense to me (assuming that the 8 in A8_UNorm means 8 bits, 32 * 32 * 1 = 1024). The DataRectangle that I get from the mapping says that the pitch is 64 as well. It does the same thing if I use R8_UNorm. R32_Float says 4 bytes per texel, which is correct. Any idea why the stream seems to be twice as long as it should be when I'm using one of the *8_UNorm formats? I'm relatively new to SlimDX (and D3D in general) so perhaps I've made some bad decisions here. code:
code:
|
# ? Dec 15, 2011 00:30 |
|
It could be that you're requesting one texture format and it's giving you the closest thing it knows of that matches. I don't know much about D3D, but I'm pretty sure that's possible in OpenGL.
|
# ? Dec 15, 2011 08:44 |
|
ZombieApostate posted:It could be that you're requesting one texture format and it's giving you the closest thing it knows of that matches. I don't know much about D3D, but I'm pretty sure that's possible in OpenGL.
|
# ? Dec 15, 2011 15:20 |
|
I tried the identical code on my ATI card at home as opposed to my Quadro at work, and it returns a stream of the proper size and pitch. I'm really confused, so I'm just using R32_Float for now. Thanks anyway
|
# ? Dec 17, 2011 04:27 |
|
If I had to guess, the hardware probably wants 64-byte row alignment for some reason. It's perfectly within spec: the entire point of pitch is that it's the actual number of bytes per row including any necessary alignment padding, not just a repeat of width * pixel size. I've never heard of it being as high as 64, but you should respect it regardless, as it does come into play with things like RGB8 textures at low resolutions. OneEightHundred fucked around with this message at 06:11 on Dec 17, 2011 |
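Whatever the pitch turns out to be, the fix is the same: when writing into the mapped stream, advance by the reported pitch per row, never by width * bytes-per-pixel. A self-contained sketch (the mapped pointer and pitch are simulated here; in SlimDX you'd get them from the DataRectangle the Map call returns):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a tightly-packed w x h 8-bit image (e.g. A8_UNorm texels) into a
// mapped surface whose rows are rowPitch bytes apart (rowPitch >= w).
void copyWithPitch(uint8_t* mapped, size_t rowPitch,
                   const uint8_t* src, size_t w, size_t h)
{
    for (size_t y = 0; y < h; ++y)
        std::memcpy(mapped + y * rowPitch, src + y * w, w);
}
```

For the 32x32 A8 case above that means 32 bytes copied per row into a 64-byte stride, leaving the padding bytes alone.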
# ? Dec 17, 2011 06:08 |
|
I apologize, complete noob here: I don't have much experience with OpenGL, but I'm trying to make a hack for a game using a proprietary OpenGL engine with no available SDK or source code to download. I'm able to successfully Detour OpenGL functions, and have been using glDisable(GL_TEXTURE_2D) inside glBegin(), glDrawElements, and glDrawArrays to check what objects are being rendered with what commands. These are the only drawing functions that I see in the list of imported OpenGL functions in the game in OllyDbg. Interestingly, glDeleteLists is referenced in Olly, but there's no corresponding call to any display-list creation functions. It has successfully disabled all model textures, except for the ground, foliage, trees, and certain static objects, which still have textures enabled. I don't understand how they are still being drawn even though I think I've detoured all the drawing functions. I'm not sure what I'm doing wrong or missing here, or whether Olly is not displaying the entire list of GL functions being used for some reason. Any input and ideas from the OpenGL gurus here on what's going on would be appreciated. ickbar fucked around with this message at 10:59 on Dec 25, 2011 |
# ? Dec 25, 2011 10:54 |
|
ickbar posted:I apologize complete Noob here, don't have too much experience with OpenGl but i'm trying to make a hack for an game using a proprietary Opengl engine with no available SDK or source-code to download. Shot in the dark here but maybe they are using some extension wrangler (like GLEW) to get access to various API entry points? (e: thus mangling up the symbols/names)
|
# ? Dec 25, 2011 18:32 |
|
:dumbpost:
Bisse fucked around with this message at 22:31 on Dec 25, 2011 |
# ? Dec 25, 2011 22:28 |
|
Thanks for the input. I thought it over and I guess it doesn't matter, as I think I could disable glDrawElements once I know it's drawing the texture I want to disable. All that leaves is performing model recognition without an SDK or open-source material to look at. Which leaves either texture CRC recognition or using asm to find the texture name at run time. Which really sucks, since I've been learning for only a few days and will need to spend a year learning how Win32 code executes in low-level assembly to get to that point.
|
# ? Dec 27, 2011 07:55 |
|
The default OpenGL implementation in Win32 doesn't have all the entry points. All the modern stuff is requested from the driver via wglGetProcAddress. So you can break on that and see what it returns. Or you can use one of the various tracing interposers to get a call trace and see what it's doing. When I've done similar stuff to what you're describing, I've taken a CRC of the texture when it's passed in to glTexImage2D and recorded the id that's bound to that unit. Then you can store that away and do whatever you want when it's bound again.
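A sketch of the texture-CRC idea: checksum the pixel data at the moment your glTexImage2D hook sees it, and remember which texture id was bound. The hooking plumbing is omitted and the hook names are invented; this is just the checksum half:

```cpp
#include <cstddef>
#include <cstdint>

// Standard CRC-32 (the zlib/PNG polynomial, bit-reflected form).
uint32_t crc32(const uint8_t* data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

// In a real hook (names hypothetical) you'd do something like:
//   void hooked_glTexImage2D(..., const void* pixels) {
//       if (crc32((const uint8_t*)pixels, w * h * 4) == kTargetCrc)
//           g_targetTextureId = g_currentlyBoundTexture;
//       real_glTexImage2D(..., pixels);
//   }
```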
|
# ? Dec 28, 2011 07:46 |
|
I'm trying to do something like this http://www.youtube.com/watch?v=mw2dm5oIN6Q&feature=related (you might want to lower the volume before clicking) in 3D in OpenGL. I'm currently reading up on VBOs but I'm not sure how to generate the rooms and connect them with corridors. My original idea was to create a box, scale it to make a room, then "carve out" a hole in one of the walls, create a new box and scale that to make a corridor, and then put a new room at the end of the corridor, etc. Doing it in 2D seems easier, just using tiles and switching out wall tiles for floor tiles to make doors and corridors. But how should I do it in 3D?
|
# ? Dec 28, 2011 11:48 |
|
Claeaus posted:I'm trying to do something like this http://www.youtube.com/watch?v=mw2dm5oIN6Q&feature=related (you might want to lower the volume before clicking) in 3D in OpenGL. I'm currently reading up on VBO:s but I'm not sure how to generate the rooms and connect them with corridors. My original idea was to create a box, scale it to make a room and then "carve out" a hole in one of the walls and create a new box and scale that to make a corridor and then put a new room in the end of the corridor etc. Simple answer: Just do it in 2D. Try not to think about these things in a 3D sense. You're not generating the dungeons in 3D, you're drawing them in 3D. Just do exactly what they did in the video, and when it comes to drawing it in 3D, I'm guessing you'd have to either use a voxel system or have cases for what to draw depending on surrounding tiles (or just draw a box on every wall tile, uglier, but easier for working with until you're done).
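The "draw a box on every wall tile" option from the post above is as simple as it sounds; a sketch of turning a 2D tile map into 3D box positions (tile characters and names are made up):

```cpp
#include <string>
#include <vector>

struct Box { int x, z; };  // a unit cube sitting at (x, 0, z)

// '#' = wall, '.' = floor; emit one box per wall tile. At render time
// you'd draw a cube mesh at each box position (and floor quads for '.').
std::vector<Box> wallBoxes(const std::vector<std::string>& map)
{
    std::vector<Box> boxes;
    for (int z = 0; z < (int)map.size(); ++z)
        for (int x = 0; x < (int)map[z].size(); ++x)
            if (map[z][x] == '#')
                boxes.push_back({x, z});
    return boxes;
}
```

The generation stays purely 2D; only this last step cares that the result is drawn in 3D.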
|
# ? Dec 28, 2011 11:57 |
|
Tw1tchy posted:Simple answer: Just do it in 2D. Try not to think about these things in a 3D sense. You're not generating the dungeons in 3D, you're drawing them in 3D. Just do exactly what they did in the video, and when it comes to drawing it in 3D, I'm guessing you'd have to either use a voxel system or have cases for what to draw depending on surrounding tiles (or just draw a box on every wall tile, uglier, but easier for working with until you're done). Managed to put this together over the day. Felt good to do what I wanted to do(the procedural rooms) instead of fighting with OpenGL.. And now it's back to fighting with OpenGL!
|
# ? Dec 28, 2011 21:56 |
|
Spite posted:The default OpenGL implementation in Win32 doesn't have all the entry points. All the modern stuff is requested from the driver via glwGetProcAddress. So you can break on that and see what it returns. cool, yeah the game i'm trying to break open is actually not so modern.p, It turns I still have too much inexperience with olly, I was able to find referenced strings to a bunch of functions I missed like 'glDrawelementsinstance' as well 'glDrawrangelements' so I have a feeling that might also somethign to do with it and I haven't hooked those yet. Which really helps explain what I saw, because objects being drawn by elements are dynamic objects compared to instances which are going to be static. EDIT: It all makes sense now, they are opengl extension functions being called by wglgetprocaddress, so I'd actually ahve to hook that function first before in order to call a pointer to the instance of the extension of hte function being used by the program and is the reason why it's not a normal function in API. I'm so naive I didn't realize this until now. Although none of it will matter if I can't do recognition. Texture CRC sounds like something I'd think about trying though I'm wondering whether I have to write it myself. Anyway thanks for the help. ickbar fucked around with this message at 21:07 on Dec 29, 2011 |
# ? Dec 29, 2011 14:56 |
|
Have you tried using gDEBugger/glIntercept in addition to ollydbg?
|
# ? Dec 29, 2011 15:31 |
|
Thinking about the possibility of doing volumetric lighting in a voxel engine similar to Voxatron. It should just be a matter of, per voxel, raytracing towards a light source and increasing brightness if nothing is in the way. I'm uncertain about how the performance will end up in, say, a 256x256x128 room... either way it should turn out interesting/fun!
|
# ? Dec 29, 2011 17:23 |
|
Bisse posted:Thinking about the possibility of doing volumetric lighting in a voxel engine similiar to Voxatron. It should just be a matter of, per voxel, raytracing towards a light source, and increasing brightness if nothing is in the way. I'm uncertain about how the performance will end up, in say a 256x256x128 room... either way it should turn out interesting/fun! You might have noticed lately that a lot of games are doing crepuscular rays, and the reason they're doing it is because they found a cheap, cheesy, but convincing way: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html The short version is that it renders the scene to an off-screen buffer where anything solid is black and anything not solid is the atmosphere (or sky) color, and then just does a zoom blur filter on that with the center at the sun and blends the result onto your framebuffer. By "zoom blur filter" I mean basically an average of pixels along a sparsely-sampled line between a point and the zoom origin. It doesn't properly handle holes in things if the holes are not visible from the camera, but nobody notices that. OneEightHundred fucked around with this message at 17:44 on Dec 29, 2011 |
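That zoom blur pass is just a handful of samples along the line toward the sun's screen position; a hedged GLSL sketch (uniform names and sample count are invented, era-appropriate GLSL 1.x):

```glsl
uniform sampler2D uOcclusionTex; // solid geometry black, sky-colored elsewhere
uniform vec2 uSunScreenPos;      // sun position in [0,1] texture space
varying vec2 vUV;

void main()
{
    const int NUM_SAMPLES = 32;
    vec2 delta = (uSunScreenPos - vUV) / float(NUM_SAMPLES);
    vec2 uv = vUV;
    vec3 sum = vec3(0.0);
    // March from this pixel toward the sun, averaging the occlusion buffer.
    for (int i = 0; i < NUM_SAMPLES; ++i) {
        sum += texture2D(uOcclusionTex, uv).rgb;
        uv += delta;
    }
    // Additively blend this result over the main framebuffer.
    gl_FragColor = vec4(sum / float(NUM_SAMPLES), 1.0);
}
```

Real implementations usually also weight samples by distance from the sun so the rays fade out, but the plain average already reads as god rays.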
# ? Dec 29, 2011 17:29 |
|
OneEightHundred posted:You might have noticed lately that a lot of games are doing crepuscular rays, and the reason they're doing it is because they found a cheap, cheesy, but convincing way:
|
# ? Dec 29, 2011 17:45 |
|
LBP2's SIGGRAPH talk on their volumetric lighting here is really good - a low-res voxel grid with some filtering/blurring. It's also cool that they reuse it for lots of different things. The only problem I see is the size of the grid; in LBP2 it sounds like they just had a fixed grid over their entire world with (I guess) limits on how big their levels could be. I was also thinking about doing something like this, but I'm lazy and there's Skyrim.
|
# ? Dec 29, 2011 20:04 |
|
ynohtna posted:Have you tried using gDEBugger/glIntercept in addition to ollydbg? GLIntercept is more useful, but didn't work. It worked well on another OpenGL application, but I guess this one has some prevention measure built in against OpenGL wrappers. (GLIntercept supposedly wraps wglGetProcAddress to log all OpenGL function calls somehow.)
|
# ? Jan 2, 2012 07:42 |
|
ickbar posted:GlIntercept is more useful, but didn't work. It worked well on another opengl application, but I guess this one has some prevention measure built in against opengl wrappers. Some older games (I know the Quake games did this) directly used LoadLibrary and GetProcAddress to get the OpenGL functions from the DLL, since they had to be able to load the other vendor-specific mini-driver DLLs as well. Wrapping wglGetProcAddress won't do much good if the game is calling GetProcAddress manually.
|
# ? Jan 3, 2012 00:35 |
|
I'm writing a Quake3 BSP renderer using D3D11 and have run into two problems when trying to implement lightmap support. 1. The lightmaps are packed into the BSP file itself, in raw 24-bit blocks. I've not managed to get D3D to load them properly as shader resources. What's the correct way to use these to create a texture? I've been trying device->CreateTexture2D and then device->CreateShaderResourceView, both of which succeed and return valid pointers, but the image is always black. 2. What's the correct way to pass two sets of TEXCOORDs per vertex? I've got something like this just now, which doesn't give the correct values for the LightmapTex pair, but works for the Tex pair. code:
|
# ? Jan 6, 2012 02:44 |
|
quote:16+12+12 (You should use field-offset macros to avoid this mistake!) OneEightHundred fucked around with this message at 06:30 on Jan 6, 2012 |
# ? Jan 6, 2012 06:15 |
|
D'oh, should've spotted that one. Thanks. As for 2., I've resolved that by creating a PNG in memory from the raw lightmap data and using D3DX11CreateShaderResourceViewFromMemory, which now works.
|
# ? Jan 6, 2012 19:47 |
|
Yeah, for the offsets use D3D11_APPEND_ALIGNED_ELEMENT.
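A sketch of what the vertex struct and matching layout might look like for the two-TEXCOORD case; the vertex layout here is an assumption, and the actual D3D11 layout array is shown in a comment so the sketch compiles stand-alone (in real code you'd include d3d11.h). The static_asserts check the offsets that D3D11_APPEND_ALIGNED_ELEMENT would compute for you (0, 12, 24, 32), instead of the hand-summed "16+12+12" mistake above:

```cpp
#include <cstddef>

// Hypothetical Quake3-style vertex: position, normal, two UV sets.
struct Vertex {
    float pos[3];
    float normal[3];
    float tex[2];          // TEXCOORD0 - diffuse
    float lightmapTex[2];  // TEXCOORD1 - lightmap
};

// The matching input layout, as a comment so this compiles without the
// D3D11 headers. D3D11_APPEND_ALIGNED_ELEMENT lets the runtime derive
// each AlignedByteOffset from the previous element's format:
//
// D3D11_INPUT_ELEMENT_DESC layout[] = {
//   {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
//    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
//   {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
//    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
//   {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0,
//    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
//   {"TEXCOORD", 1, DXGI_FORMAT_R32G32_FLOAT,    0,
//    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
// };

static_assert(offsetof(Vertex, normal) == 12, "normal offset");
static_assert(offsetof(Vertex, tex) == 24, "diffuse UV offset");
static_assert(offsetof(Vertex, lightmapTex) == 32, "lightmap UV offset");
```

Note the second TEXCOORD reuses the same semantic name with SemanticIndex 1, matching TEXCOORD1 in the shader.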
|
# ? Jan 7, 2012 17:03 |
|
Any performance thoughts on this basic, initial design for a voxel engine in OpenGL: - World divided into 16x16x64 chunks of voxels for cache reasons; the game requires only 64 voxels of height. - Create a vertex array with 17x17x65 vertices, one for each cube corner; this is shared by all chunks. Store it in GL. - When a chunk is edited, generate an index array and color array for the drawn vertices. Delete the old arrays, store the new ones in GL. - Every frame, use the vertex+index+color buffers to draw polygons. I'm estimating 6-8 chunks to be updated every frame. I'm also hoping I can use some tricks to generate dynamic lighting, like only updating the color array to make shadows when objects move. Wondering... should I use vertex arrays or VBOs?
|
# ? Jan 11, 2012 11:22 |
|
Vertex arrays are deprecated, use VBOs.
|
# ? Jan 11, 2012 18:16 |
|
|
Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers.
|
# ? Jan 11, 2012 18:18 |