roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

HolaMundo posted:

Thanks for the answers.
What I'm doing now is having two separate shaders (one for texture mapping, the other for normal mapping) and switching between them depending on what I want to render.
You can do that without having two copies of the shared code, something like this:

code:
struct VS_OUTPUT {
  //whatever values are passed to the pixel shader
};
VS_OUTPUT MyVertexShader(float4 pos : POSITION,
                         float3 normal : NORMAL,
                         //whatever other values are in any of the vertices
                         uniform bool usenormals)
{
  VS_OUTPUT output;
  if (usenormals) {
    //do the normal-parsing part
  }
  //do everything else
  return output;
}

technique VSWithNormals
{
  pass P0
  {
    VertexShader = compile vs_2_0 MyVertexShader(true);
    PixelShader = compile ps_2_0 WhateverFunction();
  }
}

technique VSWithoutNormals
{
  pass P0
  {
    VertexShader = compile vs_2_0 MyVertexShader(false);
    PixelShader = compile ps_2_0 WhateverFunction();
  }
}
This way it compiles to exactly the same output as if it were two separate functions, one with the normal parsing and one without.

You can also (though I'm not sure whether it's a horrible idea you shouldn't copy) set a shader variable that selects which compiled version of the shader function a single technique uses - I've done it something like this:
code:
VertexShader MyVSArray[4] = {
  compile vs_2_0 MyVertexShader(false, false),
  compile vs_2_0 MyVertexShader(false, true),
  compile vs_2_0 MyVertexShader(true, false),
  compile vs_2_0 MyVertexShader(true, true)
};

technique SwitchedShader {
  pass P0
  {
    VertexShader = (MyVSArray[MyVariable]);
    PixelShader = compile ps_2_0 WhateverFunction();
  }
};


Spite
Jul 27, 2001

Small chance of that...

zzz posted:

I haven't touched GPU stuff in a while, but I was under the impression that static branches based on global uniform variables will be optimized away by all modern compilers/drivers and never get executed on the GPU, so it wouldn't make a significant difference either way...?

Best way to find out is benchmark both, I guess :)

You'd hope so, but I wouldn't assume that! The vendors do perform various crazy optimizations based on the data. I've seen a certain vendor attempt to optimize 0.0 passed in as a uniform by recompiling the shader and making that a constant. Doesn't always work so well when those values are part of an array of bone transforms, heh.

Basically, you don't want to branch if you can avoid it. Fragments are executed in groups, so if you have good locality for your branches (i.e., all the fragments in a block take the same branch) you won't hit the nasty miss case.

nolen
Apr 4, 2004

butts.
Cross-posting this from the Mac OS/iOS Apps thread.

I'm trying to do something with cocos2d, but have been told that it will involve some OpenGL ES work to achieve what I'm looking to do.

This is just a mockup I whipped up in my photo editor.

[mockup image: the original sprite on the left, the altered version on the right]
Let's say I want to load in an image like the one on the left but alter it at runtime to look like the one on the right.

I have no idea where to start with plotting texture points to a 2D polygon and all that jazz. Any suggestions or simple examples that would help?

Spite
Jul 27, 2001

Small chance of that...

nolen posted:

Cross-posting this from the Mac OS/iOS Apps thread.

I'm trying to do something with cocos2d, but have been told that it will involve some OpenGL ES work to achieve what I'm looking to do.

This is just a mockup I whipped up in my photo editor.

[mockup image: the original sprite on the left, the altered version on the right]
Let's say I want to load in an image like the one on the left but alter it at runtime to look like the one on the right.

I have no idea where to start with plotting texture points to a 2D polygon and all that jazz. Any suggestions or simple examples that would help?

Split your quad into 2 quads. Then look into an affine transform.
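
To make that concrete, here's a minimal C++ sketch (all the names are made up) of what "split and transform" means - keep the left half of the sprite quad fixed and run the right half's vertices through an affine transform:
code:
struct Vec2 { float x, y; };

// A 2x3 affine transform: (x, y) -> (a*x + b*y + tx, c*x + d*y + ty).
struct Affine {
  float a, b, tx;
  float c, d, ty;
  Vec2 apply(Vec2 p) const {
    return { a * p.x + b * p.y + tx,
             c * p.x + d * p.y + ty };
  }
};

// Split a unit quad at x = 0.5 and transform only the right half's
// vertices (the matching texture coordinates stay put), giving the
// folded look from the mockup.
void splitAndFold(const Affine& fold, Vec2 out[8]) {
  const Vec2 v[8] = {
    {0.0f, 0.0f}, {0.5f, 0.0f}, {0.5f, 1.0f}, {0.0f, 1.0f},  // left quad
    {0.5f, 0.0f}, {1.0f, 0.0f}, {1.0f, 1.0f}, {0.5f, 1.0f},  // right quad
  };
  for (int i = 0; i < 8; ++i)
    out[i] = (i < 4) ? v[i] : fold.apply(v[i]);
}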

Also cocos2D is one of the worst pieces of software known to man (not helpful, I know...)

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

HolaMundo posted:

How should I handle having multiple shaders?
For example, I now have a normal mapping shader (which also handles texture mapping), but let's say I want to render something else with a texture but no normal map, how would I do that? Having another shader just for texture mapping seems stupid.
Also thought about having a bool to turn normal mapping on or off but it doesn't seem right either.

The ideal solution is to use the preprocessor to #ifdef out sections of the code corresponding to different features, then pass defines to the compiler as macros and generate all the permutations you might need.

However, it's a lot simpler (and practically as good) to just place the code in branches, and branch based on bool parameters from constant buffers. So long as the branches are based completely on constant buffer values you shouldn't see any problem. This solution is almost as good as using defines on newer hardware; on older hardware (GeForce 7000-era) you might see some slightly slower shader loading/compilation time, but almost certainly not noticeable unless you're doing lots of streaming content.
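
For the #ifdef route, a minimal host-side sketch in C++ (D3DCompile with a D3D_SHADER_MACRO array; the NORMAL_MAPPING define, file name, and entry point are just illustrative):
code:
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile one permutation of a (hypothetical) uber-shader, toggling
// features by feeding preprocessor defines to the HLSL compiler.
HRESULT CompilePermutation(const char* src, size_t srcLen,
                           bool normalMapping, ID3DBlob** blobOut)
{
  const D3D_SHADER_MACRO defines[] = {
    { "NORMAL_MAPPING", normalMapping ? "1" : "0" },
    { nullptr, nullptr }  // array must be null-terminated
  };
  ID3DBlob* errors = nullptr;
  HRESULT hr = D3DCompile(src, srcLen, "ubershader.hlsl", defines,
                          D3D_COMPILE_STANDARD_FILE_INCLUDE,
                          "MyVertexShader", "vs_4_0", 0, 0,
                          blobOut, &errors);
  if (errors) {
    OutputDebugStringA((const char*)errors->GetBufferPointer());
    errors->Release();
  }
  return hr;
}
Generate the handful of permutations you actually use up front and pick between the compiled blobs at draw time.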

zzz posted:

I haven't touched GPU stuff in a while, but I was under the impression that static branches based on global uniform variables will be optimized away by all modern compilers/drivers and never get executed on the GPU, so it wouldn't make a significant difference either way...?

Best way to find out is benchmark both, I guess :)

Spite posted:

You'd hope so, but I wouldn't assume that! The vendors do perform various crazy optimizations based on the data. I've seen a certain vendor attempt to optimize 0.0 passed in as a uniform by recompiling the shader and making that a constant. Doesn't always work so well when those values are part of an array of bone transforms, heh.

Basically, you don't want to branch if you can avoid it. Fragments are executed in groups, so if you have good locality for your branches (i.e., all the fragments in a block take the same branch) you won't hit the nasty miss case.

This should work, so long as the static branch is on a bool.

Woz My Neg rear end posted:

It's almost always preferable, rather than using a true conditional, to run the extra calculations in all cases and multiply the result by 0 if you don't want it to contribute to the fragment.

This is the opposite of true; do not do this.

Hubis fucked around with this message at 02:31 on Dec 6, 2011

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite. That is, if a pixel on the far right edge is black, you can see black bleeding into the empty space on the left at the same Y values.

1) I don't see why this is happening at all, even if I had it set to 'wrap', the coords are (0,0) and (1,1)
2) This only started happening when I added an alpha channel to the texture. Before it was just a character on a white background, now the white background has been replaced with transparency.
3) The texture is not a power of 2, it's something like 23x46 (it's just a test sprite). I am under the impression that power of 2 textures aren't really required anymore...plus, as I said, this worked before transparency.

I can post some code later tonight, but does anyone have any ideas just from seeing these symptoms?

I have tried debugging the bleeding pixels with PIX and the tex2d call is returning the erroneous values.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Orzo posted:

I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite. That is, if a pixel on the far right edge is black, you can see black bleeding into the empty space on the left at the same Y values.

1) I don't see why this is happening at all, even if I had it set to 'wrap', the coords are (0,0) and (1,1)
2) This only started happening when I added an alpha channel to the texture. Before it was just a character on a white background, now the white background has been replaced with transparency.
3) The texture is not a power of 2, it's something like 23x46 (it's just a test sprite). I am under the impression that power of 2 textures aren't really required anymore...plus, as I said, this worked before transparency.

I can post some code later tonight, but does anyone have any ideas just from seeing these symptoms?

I have tried debugging the bleeding pixels with PIX and the tex2d call is returning the erroneous values.
Usually this is a texture data underrun. In other words, you might have either loaded the texture or uploaded the texture from a location 1 pixel worth of data earlier than you should have. Commonly caused by forgetting to parse a 32-bit value out of a header.
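
For instance, with a hypothetical raw format (the 4-byte pixel-count field here is made up), forgetting to consume the header shifts every row by a pixel, so edge texels wrap in from the opposite side:
code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical format: a 4-byte count field precedes raw RGBA data.
std::vector<uint8_t> loadRawRGBA(FILE* f, int w, int h)
{
  uint32_t pixelCount = 0;
  fread(&pixelCount, sizeof(pixelCount), 1, f);  // the easy line to forget
  std::vector<uint8_t> pixels(size_t(w) * h * 4);
  fread(pixels.data(), 1, pixels.size(), f);
  return pixels;
}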

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
Thanks, I'll look into that. Actually, this leads me to another question which I bet is related. I was doing some testing of this issue by modifying the image, saving it, and re-running my program. And I'd get really really weird results where the result image was the previous image overlapped with the new one (at this point the old one didn't even exist on disk anymore!), and I thought it was some sort of weird graphics memory caching thing.

For example, the original image was a picture of Samus from Super Metroid. I replaced it with a white square. The result was Samus and the white square blended together.

Is it possible for things like this to happen due to the underrun?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Orzo posted:

Is it possible for things like this to happen due to the underrun?
No, but it may be the case that you did something weird like only modify one mipmap level, depending on how your loading process works. Are you editing DDS files directly by any chance?

e: Actually that's another distinct possibility: A lot of mipmap generation filters like bicubic have sampling areas larger than just a 2x2 box from the previous level, so you can potentially get spillover from the other edge of the image if it's assuming that the texture tiles.

OneEightHundred fucked around with this message at 00:39 on Dec 7, 2011

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I just wanted to report that the problem went away when I changed the dimensions to powers of 2. I thought that wasn't a requirement anymore...what gives?

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Orzo posted:

I am trying to render some very basic sprites with SlimDX. Everything works fine, I have a simple shader which basically just passes values through. In the shader, my texture sampler has AddressU and AddressV set to 'clamp', but I'm seeing 'bleeding' on the edges of my sprite.

If you're using DirectX9, then this will explain the problem and how to solve it.
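
(The D3D9 issue being linked is presumably the half-texel offset: in D3D9, pixel centers and texel centers are misaligned by half a unit, so a screen-aligned quad samples its neighbors at the edges. A sketch of the usual fix, assuming pre-transformed XYZRHW vertices:)
code:
// In D3D9, texels map 1:1 onto pixels only if screen-space coordinates
// are shifted by -0.5 (see "Directly Mapping Texels to Pixels" in the
// DirectX documentation).
struct ScreenVertex {
  float x, y, z, rhw;  // D3DFVF_XYZRHW position
  float u, v;
};

void alignQuadToTexels(ScreenVertex* quad, int vertexCount)
{
  for (int i = 0; i < vertexCount; ++i) {
    quad[i].x -= 0.5f;
    quad[i].y -= 0.5f;
  }
}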

piratepilates
Mar 28, 2004

So I will learn to live with it. Because I can live with it. I can live with it.



I don't know how much you guys know about path tracing and the rendering equation, but I've got a question about it:

In all of the simple path tracing algorithms using lots of Monte Carlo samples that I see in lecture notes, the tracing function randomly chooses between returning the emitted value for the current surface and continuing by tracing another ray from that surface's hemisphere (for example in the slides here). Like so:
code:
TracePath(p, d) returns (r,g,b) [and calls itself recursively]:
    Trace ray (p, d) to find nearest intersection p'
    Select with probability (say) 50%:
        Emitted:
            return 2 * (Le_red, Le_green, Le_blue) // 2 = 1/(50%)
        Reflected:
            generate ray in random direction d'
            return 2 * fr(d -> d') * (n dot d') * TracePath(p', d')
Why does it randomly choose between returning the emissive element and continuing the path? Is this just a way of using Russian roulette to terminate paths without bias? Surely it would make more sense to count the emissive and reflective contributions of every path together and use Russian roulette just to decide whether to continue tracing or not.

Also, why do the algorithms that don't choose between the emission and reflection components instead only count the first emissive element? The rendering equation for a point is its emitted light plus the integral of the incoming light, which in turn is each other surface's emitted light plus its own integral of incoming light, so why not count emission at every bounce of every path?
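
Something like this is what I mean - a C++ sketch of the "always add emission, Russian roulette only for termination" estimator (Hit and traceNearest are made-up stand-ins for a real scene):
code:
#include <random>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Hit {
  Vec3 emitted;          // Le at the hit point
  Vec3 brdfCosOverPdf;   // fr * (n dot d') / pdf(d'), for a sampled d'
  Vec3 point, nextDir;
};
Hit traceNearest(Vec3 p, Vec3 d);  // assumed to exist (and to always hit)

// Emission is counted at *every* bounce; Russian roulette only decides
// whether the path continues, and survivors are reweighted by
// 1/pContinue to keep the estimator unbiased.
Vec3 tracePath(Vec3 p, Vec3 d, std::mt19937& rng)
{
  std::uniform_real_distribution<float> u01(0.0f, 1.0f);
  Hit h = traceNearest(p, d);
  Vec3 radiance = h.emitted;
  const float pContinue = 0.5f;
  if (u01(rng) < pContinue) {
    Vec3 incoming = tracePath(h.point, h.nextDir, rng);
    radiance = add(radiance,
                   scale(mul(h.brdfCosOverPdf, incoming), 1.0f / pContinue));
  }
  return radiance;
}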

Seat Safety Switch
May 27, 2008

MY RELIGION IS THE SMALL BLOCK V8 AND COMMANDMENTS ONE THROUGH TEN ARE NEVER LIFT.

Pillbug
I'm using SlimDX with Direct3D 10.1, and I created a 2D 32x32 texture of format A8_UNorm, set up for use as a shader resource and flagged for write access. When I map the texture to get a stream to write into (using WriteDiscard and mip level 0), the stream's stated Length is 2048 bytes, which doesn't make any sense to me (assuming the 8 in A8_UNorm means 8 bits, 32 * 32 * 1 = 1024).

The DataRectangle that I get from the mapping says that the pitch is 64 as well. It does the same thing if I use R8_UNorm. R32_Float says 4 bytes per texel, which is correct.

Any idea why the stream seems to be twice as long as it should be when I'm using one of the *8_UNorm formats? I'm relatively new to SlimDX (and D3D in general) so perhaps I've made some bad decisions here.

code:
Texture2DDescription textureDescription = new Texture2DDescription
{
  Width = 32,
  Height = 32,
  Format = SlimDX.DXGI.Format.A8_UNorm,
  MipLevels = 1,
  ArraySize = 1,
  SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
  Usage = ResourceUsage.Dynamic,
  CpuAccessFlags = CpuAccessFlags.Write,
  BindFlags = BindFlags.ShaderResource,
  OptionFlags = ResourceOptionFlags.None
};
_texture = new Texture2D(_device, textureDescription);
code:
textureStream = _texture.Map(0, MapMode.WriteDiscard, MapFlags.None).Data;

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
It could be that you're requesting one texture format and it's giving you the closest thing it knows of that matches. I don't know much about D3D, but I'm pretty sure that's possible in OpenGL.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ZombieApostate posted:

It could be that you're requesting one texture format and it's giving you the closest thing it knows of that matches. I don't know much about D3D, but I'm pretty sure that's possible in OpenGL.
This wouldn't happen because D3D10 mandates that everything in the spec is supported.

Seat Safety Switch
May 27, 2008

MY RELIGION IS THE SMALL BLOCK V8 AND COMMANDMENTS ONE THROUGH TEN ARE NEVER LIFT.

Pillbug
I tried the identical code on my ATI card at home as opposed to my Quadro at work, and it returns a stream of the proper size and pitch. I'm really confused, so I'm just using R32_Float for now. Thanks anyway :)

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
If I had to guess, the hardware probably wants 64-byte row alignment for some reason. It's perfectly within spec; the entire point of pitch is that it's the actual number of bytes per row including any necessary alignment padding, not just a repeat of width * pixel size.

I've never heard of it being as high as 64, but you should respect it regardless, as it does come into play with things like RGB8 textures at low resolutions.
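
In practice that just means advancing by the pitch when writing rows. A D3D11 sketch in C++ (the SlimDX version is the same idea using DataRectangle.Pitch):
code:
#include <d3d11.h>
#include <cstdint>
#include <cstring>

// Copy a tightly packed 8-bit-per-texel image into a mapped texture,
// advancing by RowPitch (which may include padding) rather than width.
void uploadA8(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
              const uint8_t* src, int width, int height)
{
  D3D11_MAPPED_SUBRESOURCE mapped = {};
  if (FAILED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    return;
  uint8_t* dstRow = static_cast<uint8_t*>(mapped.pData);
  for (int y = 0; y < height; ++y) {
    memcpy(dstRow, src + size_t(y) * width, width);  // 1 byte per texel
    dstRow += mapped.RowPitch;  // 64 on the Quadro, 32 on the ATI card
  }
  ctx->Unmap(tex, 0);
}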

OneEightHundred fucked around with this message at 06:11 on Dec 17, 2011

ickbar
Mar 8, 2005
Cannonfodder #35578
I apologize, complete noob here. I don't have much experience with OpenGL, but I'm trying to make a hack for a game that uses a proprietary OpenGL engine with no available SDK or source code.

I'm able to successfully Detour OpenGL functions, and have been calling glDisable(GL_TEXTURE_2D) inside glBegin(), glDrawElements, and glDrawArrays to check which objects are being rendered with which commands. These are the only drawing functions I see in the game's list of imported OpenGL functions in OllyDbg. Interestingly, glDeleteLists is referenced in Olly, but there's no corresponding call to the display-list creation functions.

It successfully disables all model textures except for the ground, foliage, trees, and certain static objects, which still render textured.

I don't understand how those are still being drawn when I think I've detoured all the drawing functions. I'm not sure what I'm doing wrong or missing here, or whether Olly just isn't displaying the entire list of GL functions in use for some reason.

Any input and ideas from the OpenGL gurus here on what's going on would be appreciated.

ickbar fucked around with this message at 10:59 on Dec 25, 2011

shodanjr_gr
Nov 20, 2007

ickbar posted:

I apologize, complete noob here. I don't have much experience with OpenGL, but I'm trying to make a hack for a game that uses a proprietary OpenGL engine with no available SDK or source code.

I'm able to successfully Detour OpenGL functions, and have been calling glDisable(GL_TEXTURE_2D) inside glBegin(), glDrawElements, and glDrawArrays to check which objects are being rendered with which commands. These are the only drawing functions I see in the game's list of imported OpenGL functions in OllyDbg. Interestingly, glDeleteLists is referenced in Olly, but there's no corresponding call to the display-list creation functions.

It successfully disables all model textures except for the ground, foliage, trees, and certain static objects, which still render textured.

I don't understand how those are still being drawn when I think I've detoured all the drawing functions. I'm not sure what I'm doing wrong or missing here, or whether Olly just isn't displaying the entire list of GL functions in use for some reason.

Any input and ideas from the OpenGL gurus here on what's going on would be appreciated.

Shot in the dark here but maybe they are using some extension wrangler (like GLEW) to get access to various API entry points? (e: thus mangling up the symbols/names)

Bisse
Jun 26, 2005

:dumbpost:

Bisse fucked around with this message at 22:31 on Dec 25, 2011

ickbar
Mar 8, 2005
Cannonfodder #35578
Thanks for the input. I thought it over and I guess it doesn't matter, as I think I can disable glDrawElements once I know it's drawing the texture I want to disable.

All that leaves is performing model recognition without an SDK or open-source material to look at. That means either texture CRC recognition or using asm to find the texture name at runtime. Which really sucks, since I've only been learning for a few days and will need to spend a year learning how Win32 code executes in low-level assembly to get to that point.

Spite
Jul 27, 2001

Small chance of that...
The default OpenGL implementation in Win32 doesn't have all the entry points. All the modern stuff is requested from the driver via wglGetProcAddress. So you can break on that and see what it returns.

Or you can use one of the various tracing interposers to get a call trace and see what it's doing.

When I've done stuff similar to what you're describing, I've taken a CRC of the texture data when it's passed to glTexImage2D and recorded the id that's bound to that unit. Then you can store that away and do whatever you want when it's bound again.
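
A sketch of that idea in C++ (assumes you've already detoured glTexImage2D with Detours or similar, and that crc32 is whatever implementation you have handy):
code:
#include <windows.h>
#include <GL/gl.h>
#include <cstdint>
#include <unordered_map>

uint32_t crc32(const void* data, size_t len);  // assumed available

// texture id -> CRC of the pixel data it was created with
static std::unordered_map<GLuint, uint32_t> g_textureCrcs;

typedef void (APIENTRY* PFNGLTEXIMAGE2D)(GLenum, GLint, GLint, GLsizei,
                                         GLsizei, GLint, GLenum, GLenum,
                                         const void*);
static PFNGLTEXIMAGE2D g_realTexImage2D;  // filled in by the detour setup

// Detour target: fingerprint the upload, then call the real function.
void APIENTRY HookedTexImage2D(GLenum target, GLint level, GLint internalFmt,
                               GLsizei w, GLsizei h, GLint border,
                               GLenum format, GLenum type, const void* pixels)
{
  if (pixels && level == 0 && format == GL_RGBA && type == GL_UNSIGNED_BYTE) {
    GLint boundId = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundId);
    g_textureCrcs[(GLuint)boundId] = crc32(pixels, size_t(w) * h * 4);
  }
  g_realTexImage2D(target, level, internalFmt, w, h, border,
                   format, type, pixels);
}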

Claeaus
Mar 29, 2010
I'm trying to do something like this http://www.youtube.com/watch?v=mw2dm5oIN6Q&feature=related (you might want to lower the volume before clicking) in 3D in OpenGL. I'm currently reading up on VBOs, but I'm not sure how to generate the rooms and connect them with corridors. My original idea was to create a box and scale it to make a room, then "carve out" a hole in one of the walls, create a new box scaled into a corridor, and put a new room at the end of the corridor, etc.

Doing it in 2D seems easier, just using tiles and switching out wall-tiles to floor-tiles to make doors and corridors. But how should I do it in 3D?

Jewel
May 2, 2009

Claeaus posted:

I'm trying to do something like this http://www.youtube.com/watch?v=mw2dm5oIN6Q&feature=related (you might want to lower the volume before clicking) in 3D in OpenGL. I'm currently reading up on VBOs, but I'm not sure how to generate the rooms and connect them with corridors. My original idea was to create a box and scale it to make a room, then "carve out" a hole in one of the walls, create a new box scaled into a corridor, and put a new room at the end of the corridor, etc.

Doing it in 2D seems easier, just using tiles and switching out wall-tiles to floor-tiles to make doors and corridors. But how should I do it in 3D?

Simple answer: Just do it in 2D. Try not to think about these things in a 3D sense. You're not generating the dungeons in 3D, you're drawing them in 3D. Just do exactly what they did in the video, and when it comes to drawing it in 3D, I'm guessing you'd have to either use a voxel system or have cases for what to draw depending on surrounding tiles (or just draw a box on every wall tile, uglier, but easier for working with until you're done).
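
A sketch of the "box on every wall tile" version in C++ (Tile and emitCube are made-up stand-ins):
code:
#include <vector>

enum class Tile { Wall, Floor };
struct Vec3 { float x, y, z; };

// Hypothetical helper: appends the vertices of a unit cube at the given
// position to a vertex list destined for a buffer.
void emitCube(std::vector<Vec3>& verts, float x, float y, float z);

// Generate the dungeon purely in 2D, then "draw it in 3D" by emitting a
// raised cube per wall tile and a floor cube everywhere.
std::vector<Vec3> buildDungeonMesh(const std::vector<std::vector<Tile>>& map)
{
  std::vector<Vec3> verts;
  for (size_t y = 0; y < map.size(); ++y) {
    for (size_t x = 0; x < map[y].size(); ++x) {
      if (map[y][x] == Tile::Wall)
        emitCube(verts, float(x), 1.0f, float(y));  // wall block
      emitCube(verts, float(x), 0.0f, float(y));    // floor block
    }
  }
  return verts;
}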

Claeaus
Mar 29, 2010

Tw1tchy posted:

Simple answer: Just do it in 2D. Try not to think about these things in a 3D sense. You're not generating the dungeons in 3D, you're drawing them in 3D. Just do exactly what they did in the video, and when it comes to drawing it in 3D, I'm guessing you'd have to either use a voxel system or have cases for what to draw depending on surrounding tiles (or just draw a box on every wall tile, uglier, but easier for working with until you're done).

Managed to put this together over the day.

Felt good to do what I wanted to do (the procedural rooms) instead of fighting with OpenGL...

And now it's back to fighting with OpenGL!

ickbar
Mar 8, 2005
Cannonfodder #35578

Spite posted:

The default OpenGL implementation in Win32 doesn't have all the entry points. All the modern stuff is requested from the driver via wglGetProcAddress. So you can break on that and see what it returns.

Or you can use one of the various tracing interposers to get a call trace and see what it's doing.

When I've done stuff similar to what you're describing, I've taken a CRC of the texture data when it's passed to glTexImage2D and recorded the id that's bound to that unit. Then you can store that away and do whatever you want when it's bound again.

Cool, yeah, the game I'm trying to break open is actually not so modern. It turns out I still have too much inexperience with Olly; I was able to find referenced strings for a bunch of functions I'd missed, like 'glDrawElementsInstanced' as well as 'glDrawRangeElements', so I have a feeling those might have something to do with it, and I haven't hooked them yet. Which really helps explain what I saw, because objects drawn by elements are dynamic, compared to instanced ones, which are going to be static.

EDIT: It all makes sense now. They're OpenGL extension functions obtained through wglGetProcAddress, so I'd actually have to hook that function first in order to get a pointer to the extension function the program is using, which is why it doesn't show up as a normal API import. I'm so naive I didn't realize this until now.

Although none of it will matter if I can't do recognition. Texture CRC sounds like something worth trying, though I'm wondering whether I have to write it myself. Anyway, thanks for the help.

ickbar fucked around with this message at 21:07 on Dec 29, 2011

ynohtna
Feb 16, 2007

backwoods compatible
Illegal Hen
Have you tried using gDEBugger/glIntercept in addition to OllyDbg?

Bisse
Jun 26, 2005

Thinking about the possibility of doing volumetric lighting in a voxel engine similar to Voxatron. It should just be a matter of, per voxel, raytracing toward a light source and increasing the brightness if nothing is in the way. I'm uncertain how the performance will end up in, say, a 256x256x128 room... either way it should turn out interesting/fun!
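
(For concreteness, a naive fixed-step version of that per-voxel march in C++ - a real implementation would want a proper 3D DDA so each voxel along the ray is visited exactly once, but this shows the idea:)
code:
#include <cmath>

const int W = 256, D = 256, H = 128;
extern bool solid[W][D][H];  // the room's occupancy grid, filled elsewhere

// March from a voxel toward the light in fixed steps; returns false as
// soon as a solid voxel blocks the path.
bool lightVisible(float x, float y, float z, float lx, float ly, float lz)
{
  float dx = lx - x, dy = ly - y, dz = lz - z;
  float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
  int steps = int(dist * 2.0f);  // ~2 samples per voxel
  for (int i = 1; i < steps; ++i) {
    float t = float(i) / steps;
    int vx = int(x + dx * t), vy = int(y + dy * t), vz = int(z + dz * t);
    if (vx < 0 || vx >= W || vy < 0 || vy >= D || vz < 0 || vz >= H)
      break;  // left the room; treat as unobstructed
    if (solid[vx][vy][vz])
      return false;
  }
  return true;
}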

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Bisse posted:

Thinking about the possibility of doing volumetric lighting in a voxel engine similar to Voxatron. It should just be a matter of, per voxel, raytracing toward a light source and increasing the brightness if nothing is in the way. I'm uncertain how the performance will end up in, say, a 256x256x128 room... either way it should turn out interesting/fun!
You might have noticed lately that a lot of games are doing crepuscular rays, and the reason they're doing it is because they found a cheap, cheesy, but convincing way:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html

The short version is that it renders the scene to an off-screen buffer where anything solid is black and anything not solid is the atmosphere (or sky) color, then does a zoom blur filter on that with the center at the sun and blends the result onto your framebuffer. By "zoom blur filter" I mean basically an average of pixels along a sparsely-sampled line between each point and the zoom origin.

It doesn't properly handle holes in things if the holes are not visible from the camera, but nobody notices that.
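
(A CPU-side sketch of that zoom blur in C++, just to pin down the math - the real thing lives in a pixel shader, but the sampling is the same. The sample count and density value are made-up tuning numbers:)
code:
#include <vector>

// Average sparse taps along the line from each pixel toward the light's
// screen position; `src` is the black/sky occlusion buffer. The result
// gets additively blended onto the framebuffer.
std::vector<float> zoomBlur(const std::vector<float>& src, int w, int h,
                            float lightX, float lightY)
{
  const int kSamples = 32;
  const float kDensity = 0.9f;  // how far toward the light to reach
  std::vector<float> dst(src.size());
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
      float stepX = (lightX - x) * kDensity / kSamples;
      float stepY = (lightY - y) * kDensity / kSamples;
      float sum = 0.0f, sx = float(x), sy = float(y);
      for (int i = 0; i < kSamples; ++i) {
        int ix = int(sx), iy = int(sy);
        if (ix >= 0 && ix < w && iy >= 0 && iy < h)
          sum += src[iy * w + ix];
        sx += stepX;
        sy += stepY;
      }
      dst[y * w + x] = sum / kSamples;
    }
  }
  return dst;
}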

OneEightHundred fucked around with this message at 17:44 on Dec 29, 2011

Bisse
Jun 26, 2005

OneEightHundred posted:

You might have noticed lately that a lot of games are doing crepuscular rays, and the reason they're doing it is because they found a cheap, cheesy, but convincing way:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html

The short version is that it renders the scene to an off-screen buffer where anything solid is black and anything not solid is the atmosphere (or sky) color, then does a zoom blur filter on that with the center at the sun and blends the result onto your framebuffer.

It doesn't properly handle holes in things if the holes are not visible from the camera, but nobody notices that.
That looks nicer than you'd expect for such a horribly cheap trick! Although that only works if the light is directed toward the camera. An obvious example is a dark corridor with a light above a fan. The lights in this image are obviously done using a cheap trick, but it's the effect I'm thinking about.

ani47
Jul 25, 2007
+
LBP2's SIGGRAPH talk on their volumetric lighting here is really good - a low-res voxel grid with some filtering/blurring. It's also cool that they reuse it for lots of different things.

The only problem I see is the size of the grid; in LBP2 it sounds like they just had a fixed grid over their entire world with (I guess) limits on how big their levels could be. I was also thinking about doing something like this, but I'm lazy and there's Skyrim :(.

ickbar
Mar 8, 2005
Cannonfodder #35578

ynohtna posted:

Have you tried using gDEBugger/glIntercept in addition to ollydbg?

GLIntercept is more useful, but it didn't work. It worked well on another OpenGL application, but I guess this one has some prevention measure built in against OpenGL wrappers.

(GLIntercept supposedly wraps wglGetProcAddress to log all OpenGL function calls somehow.)

The_Franz
Aug 8, 2003

ickbar posted:

GLIntercept is more useful, but it didn't work. It worked well on another OpenGL application, but I guess this one has some prevention measure built in against OpenGL wrappers.

(GLIntercept supposedly wraps wglGetProcAddress to log all OpenGL function calls somehow.)

Some older games (I know the Quake games did this) directly used LoadLibrary and GetProcAddress to get the OpenGL functions from the DLL, since they had to be able to load the vendor-specific mini-driver DLLs as well. Wrapping wglGetProcAddress won't do much good if the game is calling GetProcAddress manually.
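
A sketch of that pattern in C++ (glClear stands in for whatever entry point the engine pulls):
code:
#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY* PFNGLCLEAR)(GLbitfield);

// An old engine might bind GL itself: load opengl32.dll (or a vendor
// mini-driver) and pull entry points with GetProcAddress directly,
// bypassing any wrapper that only hooks wglGetProcAddress.
PFNGLCLEAR LoadGlClear()
{
  HMODULE gl = LoadLibraryA("opengl32.dll");
  if (!gl)
    return nullptr;
  return (PFNGLCLEAR)GetProcAddress(gl, "glClear");
}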

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
I'm writing a Quake3 BSP renderer using D3D11 and have run into two problems when trying to implement lightmap support.

1. The lightmaps are packed into the BSP file itself, in raw 24-bit blocks. I've not managed to get D3D to load them properly as shader resources. What's the correct way to use these to create a texture? I've been trying device->CreateTexture2D and then device->CreateShaderResourceView, both of which SUCCEED and return valid pointers, but the image is always black.

2. What's the correct way to pass two sets of TEXCOORDs per vertex? I've got something like this just now, which doesn't give the correct values for the LightmapTex pair, but works for the Tex pair.

code:

struct VS_INPUT
{
  float4 Pos : POSITION;
  float3 Norm : NORMAL;
  float2 Tex : TEXCOORD0;
  float2 LightmapTex : TEXCOORD1;
};

D3D11_INPUT_ELEMENT_DESC layout[] = 
{
  { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 16+12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "TEXCOORD", 1, DXGI_FORMAT_R32G32_FLOAT, 0, 16+12+12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

I've hacked in a lightmap-only rendering mode using the Tex coords to hold the lightmap values and loading the lightmaps as PNGs after converting the TGAs saved previously, so I know the data and coordinates are correct; I just can't convince D3D.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

quote:

16+12+12
TEXCOORD 0 is 2 floats, not 3, should be 16+12+8

(You should use field-offset macros to avoid this mistake!)

OneEightHundred fucked around with this message at 06:30 on Jan 6, 2012

MarsMattel
May 25, 2001

God, I've heard about those cults Ted. People dressing up in black and saying Our Lord's going to come back and save us all.
D'oh, should've spotted that one. :downs: Thanks.

As for 2., I've resolved that by creating a PNG in memory from the raw lightmap data and using D3DX11CreateShaderResourceViewFromMemory, which now works.
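
(For anyone else who hits this: I believe the reason the raw route failed is that DXGI has no 24-bit RGB texture format, so the BSP's raw lightmap blocks need expanding to RGBA8 before CreateTexture2D can use them. A sketch of the direct approach in C++:)
code:
#include <d3d11.h>
#include <cstdint>
#include <vector>

// Expand each 24-bit RGB triplet to RGBA8 and hand the result to
// CreateTexture2D as initial data.
HRESULT createLightmapTexture(ID3D11Device* device, const uint8_t* rgb,
                              int w, int h, ID3D11Texture2D** texOut)
{
  std::vector<uint8_t> rgba(size_t(w) * h * 4);
  for (size_t i = 0; i < size_t(w) * h; ++i) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = 255;
  }

  D3D11_TEXTURE2D_DESC desc = {};
  desc.Width = w;
  desc.Height = h;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_IMMUTABLE;
  desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

  D3D11_SUBRESOURCE_DATA init = {};
  init.pSysMem = rgba.data();
  init.SysMemPitch = w * 4;
  return device->CreateTexture2D(&desc, &init, texOut);
}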

High Protein
Jul 12, 2009
Yeah, for the offsets use D3D11_APPEND_ALIGNED_ELEMENT.
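
That is, a version of the layout above where the runtime computes each offset from the previous element (note POSITION is widened to a float4 format here, since the original offsets imply the vertex stores 16 bytes of position):
code:
D3D11_INPUT_ELEMENT_DESC layout[] =
{
  { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,
    D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0,
    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  { "TEXCOORD", 1, DXGI_FORMAT_R32G32_FLOAT, 0,
    D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};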

Bisse
Jun 26, 2005

Any performance thoughts on this basic, initial design for a voxel engine in OpenGL:

- World divided into 16x16x64 chunks of voxels for cache reasons; the game requires only 64 voxels of height.
- Create a vertex array with 17x17x65 vertices, one for each cube corner; this is shared across all chunks. Store it in GL.
- When a chunk is edited, generate an index array and a color array for the drawn vertices. Delete the old arrays, store the new ones in GL.
- Every frame, use the vertex+index+color buffers to draw the polygons.

I'm estimating 6-8 chunks to be updated every frame. I'm also hoping I can use some tricks for dynamic lighting, like only updating the color array to make shadows when objects move.

Wondering... should I use vertex arrays or VBOs?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Vertex arrays are deprecated, use VBOs.
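
A minimal sketch of the VBO path for the chunk case above (C++ with GLEW for the GL 1.5 buffer entry points; orphaning the storage with a null glBufferData before refilling is the usual idiom for frequently rewritten buffers):
code:
#include <GL/glew.h>

GLuint chunkVbo = 0;
GLsizeiptr chunkCapacity = 0;

// One-time setup: create the buffer and size it for a chunk's worth of data.
void createChunkBuffer(GLsizeiptr maxBytes)
{
  chunkCapacity = maxBytes;
  glGenBuffers(1, &chunkVbo);
  glBindBuffer(GL_ARRAY_BUFFER, chunkVbo);
  glBufferData(GL_ARRAY_BUFFER, chunkCapacity, nullptr, GL_DYNAMIC_DRAW);
}

// Per-edit update: orphan the old storage so the driver needn't stall on
// draws still using it, then upload the freshly generated vertices.
void updateChunkBuffer(const void* verts, GLsizeiptr bytes)
{
  glBindBuffer(GL_ARRAY_BUFFER, chunkVbo);
  glBufferData(GL_ARRAY_BUFFER, chunkCapacity, nullptr, GL_DYNAMIC_DRAW);
  glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, verts);
}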


haveblue
Aug 15, 2005



Toilet Rascal
Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers.
