OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'll give you a hint, D3D shaders still cause a good deal less hitching when you hot-load them. It's not AS big of an issue now that the GLSL compilers are reasonably fast.

Of course even standardizing the parser is a bit less than what I think is needed: Stuff like type conversion needs to be standardized as well.

ATI, for example, will error out if you store an integer LITERAL into a float, or if you try using a float var to drive a for loop. glslvalidate is totally useless because (surprise!) it only catches parse failures, and the additional validation is a stub that always returns true.
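To make that concrete, here's roughly the sort of thing that trips it up, written out as GLSL source in C strings (made-up shader, just illustrating the two cases above):

code:
/* rejected by strict compilers: GLSL 1.x has no implicit int -> float
   conversion, and (per the above) some also refuse float loop counters */
static const char *lax_frag =
    "void main() {\n"
    "    float fade = 1;\n"                         /* int literal stored in a float */
    "    for (float i = 0.0; i < 4.0; i += 1.0)\n"  /* float var driving the loop */
    "        fade *= 0.5;\n"
    "    gl_FragColor = vec4(fade);\n"
    "}\n";

/* the portable spelling of the same shader */
static const char *portable_frag =
    "void main() {\n"
    "    float fade = 1.0;\n"
    "    for (int i = 0; i < 4; ++i)\n"
    "        fade *= 0.5;\n"
    "    gl_FragColor = vec4(fade);\n"
    "}\n";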

Spite
Jul 27, 2001

Small chance of that...

Luminous posted:

Do you really not understand what OneEightHundred is saying? Or are you just trolling him? You should have just stopped at your very first sentence.

If options change that would cause a need for recompilation, then it would be recompiled. However, if not, then keep using the stored built version. I'm not sure why caching appears to be an alien concept to you. Maybe you're just being all goony semantic about it, or something.

I'm coming from the perspective of someone writing the driver, not someone using the API, so I may be making my point poorly.

I'm simply saying that bytecode and cached (shipping on disk) versions aren't all that simple and I don't feel they really save you anything. There's a lot more that happens at the driver and hardware level during shader compilation and linking and it's not widely understood. To really get at the heart of the issue, you'd have to save out a multitude of variants of each shader and you'd have to make it future-compatible. I'm not sure it's worth it, and it's a royal pain in the rear end for the driver developer, that's all.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
There is no shipping of compiled shaders involved. There is no future-compatibility needed because the program can just recompile the shaders if the user has changed video cards since the last time the program was run.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Spite posted:

cached (shipping on disk)
At the heart of this cross-purposes discussion is the fact that that's not what cached means. OneEightHundred is proposing that the card allow one to save the specific compiled shader for the current situation, to avoid unnecessary time spent recompiling. Like, the first time you load a game/area, it compiles the shaders and saves them out so that every other time you load that game/area it won't need to compile them again.

If it would really "recompile shaders based on state to bake stuff in and do other optimizations" then it could still be an option, where an attempt to load a precompiled shader that is flagged as being for a different situation (you could include whatever flags you need since it's a vendor-specific binary being dumped) would return a "can't use that saved shader, please recompile" return value. Or you could even include the uncompiled shader code inside your precompiled shader file, and then in the event of loading it in a situation where the precompilation isn't valid, automatically recompile.

(Does a card really recompile shaders based on state?)

Spite
Jul 27, 2001

Small chance of that...

roomforthetuna posted:

At the heart of this cross-purposes discussion is the fact that that's not what cached means. OneEightHundred is proposing that the card allow one to save the specific compiled shader for the current situation, to avoid unnecessary time spent recompiling. Like, the first time you load a game/area, it compiles the shaders and saves them out so that every other time you load that game/area it won't need to compile them again.

If it would really "recompile shaders based on state to bake stuff in and do other optimizations" then it could still be an option, where an attempt to load a precompiled shader that is flagged as being for a different situation (you could include whatever flags you need since it's a vendor-specific binary being dumped) would return a "can't use that saved shader, please recompile" return value. Or you could even include the uncompiled shader code inside your precompiled shader file, and then in the event of loading it in a situation where the precompilation isn't valid, automatically recompile.

(Does a card really recompile shaders based on state?)

Most drivers do this already behind your back, though. And saving to disk will almost certainly mean saving an intermediate format that has to be retranslated/recompiled once the shaders are actually used anyway. What I mean to say is that what the app developer thinks of as "one specific situation", the driver thinks of as "potentially 2 dozen+ possibilities", and that's hard to reconcile.

And yes, every driver for every card currently shipping will recompile the active shader based on state. Think about a simple texture sample. What if you change the wrap mode and you don't have support for it in hardware? Your shader is already uploaded and compiled, but the driver has to fix up the shader to get the correct behavior - which causes a recompile.
For example, NVIDIA does not support seamless cube maps in hardware (the GT100 series does, anything less does not), so if you turn them on and your shader samples a cube map, they modify the shader to do the fixup and blending. Or they will borderize each face when you draw and blend in the shader.

There are dozens and dozens of situations where a driver will take your current shader (that it has told you is already compiled) and recompile it to add/remove stuff based on state. Since the driver doesn't know what's coming, it's hard to cache; it would either have to guess at a common set of state or provide all possibilities in its binary blobs. Neither is particularly appealing, and as I said before, it doesn't really gain you a whole lot unless you can avoid all the recompiles and links completely.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Regardless of how much fiddling around it does, it clearly has some sort of representation of the code that it can process into renderable variations in a lot less time than feeding it an entirely new shader.

A lot of this has to do with optimizations that are state-independent: CSE (common subexpression elimination) and algebraic simplifications, for example.

Mustach
Mar 2, 2003

In this long line, there's been some real strange genes. You've got 'em all, with some extras thrown in.
This sounds like what you want:

quote:

OpenGL 4.1 adds a couple handy features, according to Neil Trevett, president of Khronos. One is the ability to store compiled graphics programs called shaders onto the hard drive so the graphics chip can reload them as needed rather than recreate them
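For reference, the entry points behind that are glGetProgramBinary/glProgramBinary (ARB_get_program_binary). A rough sketch of the save-and-reuse flow, with includes, file I/O, and error handling skipped and made-up variable names:

code:
/* after a successful link ("prog" is a linked program object): */
GLint len = 0;
GLenum fmt = 0;
glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
void *blob = malloc(len);
glGetProgramBinary(prog, len, NULL, &fmt, blob);
/* ...write fmt and blob out to disk... */

/* on a later run: try the cached blob, fall back to compiling GLSL as usual */
GLuint cached = glCreateProgram();
glProgramBinary(cached, fmt, blob, len);

GLint ok = GL_FALSE;
glGetProgramiv(cached, GL_LINK_STATUS, &ok);
if (!ok) {
    /* the driver rejected it (different GPU, new driver, ...) -- recompile from source */
}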

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Thank God. Add the DX11 functionality as well and it'll FINALLY be at feature parity with D3D, and maybe we'll see some development with it again.

WebGL is the real news though: It's a major step towards what Google wants, which is turning web browsers into application development platforms, overthrowing the core operating system in the process. If they're aiming towards high-performance 3D, I suppose it's only a matter of time until we get better input and audio APIs and it becomes possible to write serious games on top of Chrome.

OneEightHundred fucked around with this message at 20:27 on Jul 26, 2010

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

Thank God. Add the DX11 functionality as well and it'll FINALLY be at feature parity with D3D, and maybe we'll see some development with it again.

WebGL is the real news though: It's a major step towards what Google wants, which is turning web browsers into application development platforms, overthrowing the core operating system in the process. If they're aiming towards high-performance 3D, I suppose it's only a matter of time until we get better input and audio APIs and it becomes possible to write serious games on top of Chrome.

Yeah, and my previous posts mostly came from annoyance at having to implement it :)
I still don't think it will save nearly as much time as people hope, but it's good to have the appearance of parity with DX11.

I'm curious to see how they make WebGL secure. ARB_robustness is kind of a pain, and being able to give the GPU arbitrary shader code over the web means you can do nasty things to the user.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I don't see why that's such a big deal. Basically all that's needed to secure OpenGL, as far as I can tell, is to bounds check operations that pull from VBOs/textures and not have batshit retarded shader compilers.

Granted that second point might be a bit much for certain vendors, but if you're going to say "but what if driver writers can't follow the spec that says shader code should never crash the application, even if malformed?" then you're basically saying we'll never have an API that adheres to the standards driver writers are already told to adhere to. I'm not sure that's really valid. We already have Managed DirectX, for instance; why should it be so hard to make OpenGL pull the same trick?

OneEightHundred fucked around with this message at 15:49 on Jul 27, 2010

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

I don't see why that's such a big deal. Basically all that's needed to secure OpenGL, as far as I can tell, is to bounds check operations that pull from VBOs/textures and not have batshit retarded shader compilers.

Granted that second point might be a bit much for certain vendors, but if you're going to say "but what if driver writers can't follow the spec that says shader code should never crash the application, even if malformed?" then you're basically saying we'll never have an API that adheres to the standards driver writers are already told to adhere to. I'm not sure that's really valid. We already have Managed DirectX, for instance; why should it be so hard to make OpenGL pull the same trick?

Bounds checking should already be in, since they're supposed to throw INVALID_VALUE in that case.

The hard problem to solve is making sure a shader doesn't take so long as to hang the entire system. You could easily do some sort of denial-of-service type attack with a shader - there's only one GPU and if you can peg it, the whole system will screech to a halt.

I'm not super familiar with Managed DirectX, but from what I recall it requires marshalling data across the C# runtime boundary. WebGL will absolutely have to do something similar since it's based on JavaScript bindings. The fun part will be reconciling the garbage-collected, typeless JS stuff with the OpenGL runtime underneath.

HauntedRobot
Jun 22, 2002

an excellent mod
a simple map to my heart
now give me tilt shift
Aigh, what in the hell. Having a fun time trying to ditch immediate mode, and having a hell of a time trying to draw 3 sides of a goddamn cube. The scene is basically set up so I'm looking at the cube corner-on. That means 7 vertices, 4 floats per vertex, and the elements are set up like this:

code:
static const GLushort e_cube[] = { 0, 1, 2, 3, 1, 4, 3, 5, 2, 6 };
The first 4 elements are the top face, the next 6 are the front left and front right faces, so as far as I can tell I need to do two calls to render that as two tristrips.

Shaders set up OK and do nothing much. Then in my rendering code (where vpos is the vertex position passed to the vertex shader as an attribute):

code:
glBindBuffer(GL_ARRAY_BUFFER, v_cube_buf);
glVertexAttribPointer(vpos,4,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*4,0);
glEnableVertexAttribArray(vpos);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, e_cube_buf);
glDrawElements(GL_TRIANGLE_STRIP,4,GL_UNSIGNED_SHORT,0);
glDrawElements(GL_TRIANGLE_STRIP,6,GL_UNSIGNED_SHORT,4);
glDisableVertexAttribArray(vpos);
The first call to glDrawElements seems to go OK, since I see the top face, but I can't get the second one to work - it looks like it's displaying the wrong face. Anything shine out as obviously wrong there?

Edit: doing just the one DrawElements call with all 10 elements works, but then I am redrawing one of my triangles...

Edit2: FIXED. The indices argument is an offset in bytes, and a short is 2 bytes, so that 4 should be an 8. Always the simple things.
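In code, that fix comes down to:

code:
/* the last argument is a byte offset: element 4 * sizeof(GLushort) = 8 */
glDrawElements(GL_TRIANGLE_STRIP, 6, GL_UNSIGNED_SHORT, (const GLvoid *)8);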

HauntedRobot fucked around with this message at 15:06 on Jul 29, 2010

Hammertime
May 21, 2003
stop
So the OpenGL Superbible 5th edition is out.

Contents and a couple of sample chapters are at http://my.safaribooksonline.com/9780132160926

Code samples are at:

code:
svn checkout http://oglsuperbible5.googlecode.com/svn/trunk/ oglsuperbible5-read-only
Obligatory amazon link http://www.m.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617

The question is, is this a decent source for "Modern" OpenGL? I've been perusing the sample chapters on safari books and it looks okish (not an immediate mode book) but I'm a little out of my depth.

Spite
Jul 27, 2001

Small chance of that...
Well, there really is no "modern" OpenGL, since hardly any apps have been written that use GL 4.0+ or even OpenGL 3.0+. I haven't gone through the 5th edition, but I'd hope it's decent since it's supposed to be approved by the ARB.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Spite posted:

Well, there really is no "modern" OpenGL, since hardly any apps have been written that use GL 4.0+ or even OpenGL 3.0+. I haven't gone through the 5th edition, but I'd hope it's decent since it's supposed to be approved by the ARB.
I'd think "modern" would be not depending on features that have been deprecated since 3.0 at least, even if you're targeting OpenGL 2

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

I'd think "modern" would be not depending on features that have been deprecated since 3.0 at least, even if you're targeting OpenGL 2

Yeah, most definitely. A part of me dies inside when I see a new piece of code that still uses immediate mode.

Forcing everyone to use vertex array objects, vertex buffers and shaders is the best thing the ARB ever did.

Screeb
Dec 28, 2004

status: jiggled
Immediate mode is handy a lot of the time. Without it some stuff is annoying, like if you just want to quickly test/debug something with a few lines or polygons. And what would be best practice for say rendering animated/dynamic GUI items (ie a bunch of unconnected rectangles with various textures and colours)? It's just so easy to do immediate quads, versus what, updating a vertex array and then uploading it to the GPU, then drawing it? Wouldn't that even be slower (if done on a per-object basis)? And then there's having to manage/keep buffer IDs. Maybe I just suck at it, but it seems like such a hassle. The only time I bother with VBOs and such is for performance. Am I just a scrub who needs to get with the times?

Hammertime
May 21, 2003
stop

Spite posted:

Yeah, most definitely. A part of me dies inside when I see a new piece of code that still uses immediate mode.

Forcing everyone to use vertex array objects, vertex buffers and shaders is the best thing the ARB ever did.

I just ordered my copy. I managed to use a friend's safari books account to skim through the entire book. It's based purely off the 3.x core spec (and is the first book to do so apparently) and immediate mode is nowhere to be seen, hurrah.

Provided it's not full of errors it should be good; they seem to give pretty good coverage of most of the major topics.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Screeb posted:

Immediate mode is handy a lot of the time. Without it some stuff is annoying, like if you just want to quickly test/debug something with a few lines or polygons. And what would be best practice for say rendering animated/dynamic GUI items (ie a bunch of unconnected rectangles with various textures and colours)? It's just so easy to do immediate quads, versus what, updating a vertex array and then uploading it to the GPU, then drawing it? Wouldn't that even be slower (if done on a per-object basis)? And then there's having to manage/keep buffer IDs. Maybe I just suck at it, but it seems like such a hassle. The only time I bother with VBOs and such is for performance. Am I just a scrub who needs to get with the times?
This is what batching is for: You pass an arbitrary geometry batcher some stuff, it adds it to the end of a list of geometry it can dump out in a single call, and then you draw it all at once when you detect a state change or force it.
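A bare-bones sketch of what such a batcher might look like (hypothetical names, GL 2.x-era style, and assuming a dynamic GL_ARRAY_BUFFER with matching attribute pointers is already bound):

code:
#define MAX_BATCH_VERTS 4096

typedef struct {
    GLfloat verts[MAX_BATCH_VERTS * 4];  /* x, y, u, v interleaved */
    int     numVerts;
    GLuint  texture;                     /* the state this batch was built under */
} Batcher;

static void BatchFlush(Batcher *b)
{
    if (b->numVerts == 0)
        return;
    glBindTexture(GL_TEXTURE_2D, b->texture);
    /* one upload + one draw call for everything accumulated so far */
    glBufferSubData(GL_ARRAY_BUFFER, 0, b->numVerts * 4 * sizeof(GLfloat), b->verts);
    glDrawArrays(GL_TRIANGLES, 0, b->numVerts);
    b->numVerts = 0;
}

static void BatchAddQuad(Batcher *b, GLuint tex, const GLfloat corners[4][4])
{
    static const int order[6] = { 0, 1, 2, 0, 2, 3 };  /* two triangles per quad */
    int i;

    /* state change or buffer full: dump what we have and start a new batch */
    if ((b->numVerts != 0 && b->texture != tex) || b->numVerts + 6 > MAX_BATCH_VERTS)
        BatchFlush(b);
    b->texture = tex;

    for (i = 0; i < 6; ++i) {
        const GLfloat *src = corners[order[i]];
        GLfloat *dst = &b->verts[b->numVerts * 4];
        dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
        b->numVerts++;
    }
}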

Spite
Jul 27, 2001

Small chance of that...
Yup, OneEightHundred is correct. Batch that poo poo.

Also re: updating a VBO every frame. When you render in immediate mode you ARE updating geometry every frame. And you are doing it very slowly since you're specifying each vertex individually.

Screeb
Dec 28, 2004

status: jiggled
Hmm, ok, so how would that work in practice? A singleton (or equivalent) batcher class which has a fixed array for holding vertex data, which then gets subbed into one of a bunch of VBOs depending on state? Or would you just pass it buffer IDs along with state info, which are then rendered in groups (depending on state)? Because in the first case you have a (main) RAM penalty for holding all the data, both in the batcher and for holding the input data to be sent to the batcher (and how would you handle static vs dynamic data?), and in the second case you still have to mess with managing VBOs for each object, which means the only benefit you're getting over not using the batcher is performance (versus making it easier to render stuff while still using VBOs). Is there another option I'm missing?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Use one dynamic VBO for dynamic data (or two, one for vert data one for indexes). Use glBufferSubData to stream data into it, ideally. When there isn't enough room in the buffer left to add more data, or you hit something that forces a state change, issue a draw call to whatever's in the buffer and start over.

When you "start over", use glBufferData on with a NULL pointer so you get a fresh memory region if the driver is still locking the one you were streaming into. (This mimics the D3D "discard" behavior)

a slime
Apr 11, 2005

If it's best to interleave vertex data, why doesn't glBufferSubData have a stride parameter? It feels wasteful to send 24 bytes of unchanged data per vertex every time I want to update 8 bytes of texture coordinates

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

not a dinosaur posted:

If it's best to interleave vertex data, why doesn't glBufferSubData have a stride parameter? It feels wasteful to send 24 bytes of unchanged data per vertex every time I want to update 8 bytes of texture coordinates
Use two buffers for the vert data, one static one dynamic, if you're concerned about this. In D3D, add another stream source. In OpenGL, bind a different VBO and point one of the attrib/texcoord/whatever pointers at it.
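Something along these lines, with made-up handles (staticVBO/dynamicVBO, posAttrib/uvAttrib):

code:
/* static attributes (uploaded once) live in one VBO... */
glBindBuffer(GL_ARRAY_BUFFER, staticVBO);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glEnableVertexAttribArray(posAttrib);

/* ...and the frequently-updated texcoords in their own tightly-packed VBO */
glBindBuffer(GL_ARRAY_BUFFER, dynamicVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * 2 * sizeof(GLfloat), texCoords);
glVertexAttribPointer(uvAttrib, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
glEnableVertexAttribArray(uvAttrib);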

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

Use one dynamic VBO for dynamic data (or two, one for vert data one for indexes). Use glBufferSubData to stream data into it, ideally. When there isn't enough room in the buffer left to add more data, or you hit something that forces a state change, issue a draw call to whatever's in the buffer and start over.

When you "start over", use glBufferData on with a NULL pointer so you get a fresh memory region if the driver is still locking the one you were streaming into. (This mimics the D3D "discard" behavior)

You can alleviate the locking problem by double buffering your VBOs. Use one VBO for even frames and one for odd frames. That also means you don't have to call BufferData, which may cause a slight stall as the driver deallocs and reallocs the memory. Just make sure you overwrite everything you're drawing so you don't get bad data.
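A minimal sketch of that even/odd scheme, with hypothetical names (both VBOs allocated up front with glBufferData):

code:
static GLuint   frameVBO[2];   /* both created up front, same size */
static unsigned frameIndex;

void BeginFrameGeometry(void)
{
    /* even frames write into [0], odd frames into [1]; the other buffer may
       still be read by the GPU, so we never touch it this frame */
    glBindBuffer(GL_ARRAY_BUFFER, frameVBO[frameIndex & 1]);
}

void EndFrame(void)
{
    frameIndex++;
}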

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The idea behind that behavior is that you don't need to double-buffer them: calling BufferData with a NULL pointer will give you the same memory region if it's not locked, and a fresh one if it is.

They did this very deliberately, and you can probably trust the driver to be smart about it.

Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

The idea behind that behavior is that you don't need to double-buffer them: calling BufferData with a NULL pointer will give you the same memory region if it's not locked, and a fresh one if it is.

They did this very deliberately, and you can probably trust the driver to be smart about it.

But you still may cause an allocation in that case. And if the driver is using the memory to draw from you get better pipelining if you use two. Plus there are other considerations based on whether the driver can do out of line uploads, etc, which hits certain iPhone apps. If you can spare the memory, we always recommend using two.

(Not trying to be contrarian, I swear!)

Eponym
Dec 31, 2007
I'm a beginner, and have 2 questions...

About the coordinates of lines

I have an application where I'm trying to draw lines on a 512x512 texture. The texture is mapped to a screen aligned quad with vertices (-1.0f, -1.0f), (1.0f, -1.0f), (1.0f, 1.0f), (-1.0f, 1.0f).

Suppose I want to draw a line. I don't have any problem drawing lines in world space, but if I want the lines to start and end at particular pixel positions (eg. (48,48) to (72,72)), I get confused. Is it up to me to scale and transform those endpoints so that they are mapped to the appropriate world-space coordinates? Is there anything I can do so that I don't have to perform that transformation?

About the width of lines
I am using OpenGL 2.1, so glLineWidth can be set to values greater than 1 pixel. I read that in OpenGL 3.1, glLineWidth can't be set greater than 1. In that environment, what do people do instead?

Spite
Jul 27, 2001

Small chance of that...

Eponym posted:

I'm a beginner, and have 2 questions...

About the coordinates of lines

I have an application where I'm trying to draw lines on a 512x512 texture. The texture is mapped to a screen aligned quad with vertices (-1.0f, -1.0f), (1.0f, -1.0f), (1.0f, 1.0f), (-1.0f, 1.0f).

Suppose I want to draw a line. I don't have any problem drawing lines in world space, but if I want the lines to start and end at particular pixel positions (eg. (48,48) to (72,72)), I get confused. Is it up to me to scale and transform those endpoints so that they are mapped to the appropriate world-space coordinates? Is there anything I can do so that I don't have to perform that transformation?

About the width of lines
I am using OpenGL 2.1, so glLineWidth can be set to values greater than 1 pixel. I read that in OpenGL 3.1, glLineWidth can't be set greater than 1. In that environment, what do people do instead?

For drawing lines in screen space, the easiest way will be to use an orthographic projection. Check out glOrtho (http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml). You can generate the matrix yourself if you're using a version without the matrix stack. An orthographic projection is a parallel projection - parallel lines remain parallel (as opposed to perspective projections, where parallel lines approach a vanishing point). This will let you map directly to pixels on the screen.
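For the 512x512 case in the question, a minimal GL 2.1-style sketch using the fixed-function matrix stack:

code:
/* pixel-space projection for a 512x512 target: origin at top-left, y down */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 512.0, 512.0, 0.0, -1.0, 1.0);  /* left, right, bottom, top, near, far */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* now line endpoints are just pixel coordinates */
glBegin(GL_LINES);
glVertex2i(48, 48);
glVertex2i(72, 72);
glEnd();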

glLineWidth works as you'd expect with antialiasing off. It's different with it on - check the spec for more details.

Jewel
May 2, 2009

So, I started programming in DirectX11 today. I've had experience with coding and whatnot before, but not specifically DirectX, so I went and looked at some tutorials. All was going swell - I could initialize a window and set a background color - but as soon as I tried to define a shader and draw an object, it just crashes on load. This code is extremely simple, and I've put it in a rar so you can tell me what's wrong. I'm coding in Visual C++ Express Edition because I tried for hours yesterday to get OpenGL reference paths set up in Netbeans to no avail, so I'm not trying to get DirectX working there either yet. Here's the code I have currently: http://www.mediafire.com/?i8w1trkx7c03qts

I also tried downloading and just running the provided exe from http://www.dx11.org.uk/3dcube.htm, to test if it's my DirectX install or something, and that one gives me an error of "Failed to create D3D11 device", which from their code is a failure in the "D3D11CreateDeviceAndSwapChain" call. So honestly, what am I doing wrong/what is happening? (And any help on how to set the DirectX library references up in Netbeans would be a great help.)

Sorry I'm so retarded :saddowns:

Edit: Well, nobody's helping me here yet, but I still can't figure it out. All I figured out is that "D3DX11CompileFromFile" is working fine, but "dev->CreateVertexShader" is failing.

Edit2: Well, with help from someone over at StackOverflow, running it with D3D_DRIVER_TYPE_REFERENCE is the only way to make the program work; it runs perfectly fine that way, although incredibly slowly. Every other driver type just crashes at the CreateVertexShader step.

Jewel fucked around with this message at 13:17 on Aug 17, 2010

Spite
Jul 27, 2001

Small chance of that...
Do you not have a DX11 card? The reference rasterizer will always work, but as you say, it will be horribly slow.

Jewel
May 2, 2009

Spite posted:

Do you not have a DX11 card? The reference rasterizer will always work, but as you say, it will be horribly slow.

That's the thing, I DO have a DX11 card, and all the other examples in the directx11 sdk work fine. That's why I'm having problems, I have absolutely no idea at all what's wrong.

Spite
Jul 27, 2001

Small chance of that...

Tw1tchy posted:

That's the thing, I DO have a DX11 card, and all the other examples in the directx11 sdk work fine. That's why I'm having problems, I have absolutely no idea at all what's wrong.

That cube app - what's the actual error returned by the Device creation function?

Jewel
May 2, 2009

Spite posted:

That cube app - what's the actual error returned by the Device creation function?

It just had "If (FAILED(blablabla))" and then a popup when failed, but I doubt the cube app even works anymore, I just googled for it and it's probably some extremely old code. I opened and compiled many directX11 examples from the sdk and they worked flawlessly, even something that looks exactly like what I'm trying to run right now, but I still can't work out why mine only works in REFERENCE mode and not HARDWARE mode. I bet it's some sort of tiny oversight or something.

Edit: Welp, I sure am an idiot, my graphics card doesn't support DX11, I'm gonna buy a new one soon. Thanks for trying to help anyway :sigh:

Jewel fucked around with this message at 07:27 on Aug 18, 2010

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe

Tw1tchy posted:

It just had "If (FAILED(blablabla))" and then a popup when failed, but I doubt the cube app even works anymore, I just googled for it and it's probably some extremely old code. I opened and compiled many directX11 examples from the sdk and they worked flawlessly, even something that looks exactly like what I'm trying to run right now, but I still can't work out why mine only works in REFERENCE mode and not HARDWARE mode. I bet it's some sort of tiny oversight or something.

Edit: Welp, I sure am an idiot, my graphics card doesn't support DX11, I'm gonna buy a new one soon. Thanks for trying to help anyway :sigh:

You can still create a DX11 device with a lower feature level.
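Roughly like this - pass a list of acceptable feature levels and see which one you get back (swap chain setup omitted; the list here is just an example):

code:
D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
};
D3D_FEATURE_LEVEL got;
ID3D11Device *dev = NULL;
ID3D11DeviceContext *ctx = NULL;

HRESULT hr = D3D11CreateDevice(
    NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
    requested, sizeof(requested) / sizeof(requested[0]), D3D11_SDK_VERSION,
    &dev, &got, &ctx);
/* on a DX10-class card this succeeds with got set to one of the 10_x levels;
   you just can't use the DX11-only features (e.g. SM5 shaders) on it */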

Jewel
May 2, 2009

slovach posted:

You can still create a DX11 device with a lower feature level.

I'd rather wait, though; it's just a hassle getting everything to work properly. I needed a new one anyway.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

I've been dinking around with OpenGL on Mac OS X, putting together a simple puzzle game. Right now my game objects are solid rectangles that I just do this for:

code:
glColor3f(1.0, 1.0, 1.0);
glBegin(GL_QUADS);
glVertex2i(x, y);
...
glEnd();


So I figure I should add some bitmapped graphics. I found a 'sprite sheet' or whatever you call it on a free game art website. Cool, right? Sort of. I load the file, assign it to a texture. Then I draw a quad, and texture it.

First question:
Is it possible to change the coordinates of the texture I load so that I can specify texture coordinates on a pixel-by-pixel basis?

When I do my glOrtho for my screen, I use the actual dimensions so that when I draw a line from 0,0 to 640,480, it goes all the way across the screen (instead of having to use 0,0, 1,1 or something)

The idea would be that I could pick my sprites from the sprite sheet by just saying glTexCoord2i(1,32), instead of having to multiply the coordinates 1 and 32 by 1/texture_size and passing those to glTexCoord2f(.015625, .1875). I guess I could have already written and debugged the code to just use those types of values in the time it took me to write this post.

Second question:
What's the easiest way to make the hot pink (or green, or whatever the background color is) transparent?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Bob Morales posted:

Is it possible to change the coordinates of the texture I load so that I can specify texture coordinates on a pixel-by-pixel basis?
What are you trying to do that requires this?

quote:

What's the easiest way to make the hot pink (or green, or whatever the background color is) transparent?
Use the alpha channel, convert pixels that are the chroma key color to a color with zero alpha at load time.
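A minimal sketch of that load-time conversion, assuming the image has already been decoded into an 8-bit RGBA buffer (function and names made up):

code:
/* turn every pixel matching the key color fully transparent, in place */
void ApplyColorKey(unsigned char *rgba, int numPixels,
                   unsigned char keyR, unsigned char keyG, unsigned char keyB)
{
    int i;
    for (i = 0; i < numPixels; ++i) {
        unsigned char *p = rgba + i * 4;
        if (p[0] == keyR && p[1] == keyG && p[2] == keyB)
            p[3] = 0;   /* alpha = 0; everything else stays opaque */
    }
}
Then draw with blending on (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) or with alpha test, and the keyed pixels disappear.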

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

OneEightHundred posted:

What are you trying to do that requires this?

Here's the file I am using for the textures.



Right now I have to draw a quad and give it the texture coordinates for whichever object on that page I want to draw. So if I wanted to draw that red guy in the corner, I would have to use (3*width, 0), and (4*width, width) for the upper and lower X/Y coords. Not to mention I have to do some more simple math because the corners of the actual texture are 0,0 and 1,1. I would like them to be 0,0 and 128,128 (the actual number of pixels in the texture)

What I want to do at load time is load each piece into its own texture. So I'd have an array of GLuints and I could just say texture[RED_GUY] or texture[BLUE_BALL] (these would be hardcoded in #defines or an enum or something).

I want to avoid having to make a separate file for each sprite. I'm new to OpenGL. In the past I would use a 2D sprite library, load the image, then do 'grab_sprite(x,y,x2,y2, &spritedata[this_sprite]) to pick each one out of the main image.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Bob Morales posted:

I would like them to be 0,0 and 128,128 (the actual number of pixels in the texture)
code:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f/(float)textureWidth, 1.0f/(float)textureHeight, 1.0f);
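With that in place, texcoords can be given in pixels; e.g. to draw a 32x32 sprite (px/py made up here):

code:
/* texcoords are now in pixels thanks to the texture matrix above */
glBegin(GL_QUADS);
glTexCoord2i(0, 0);   glVertex2i(px, py);
glTexCoord2i(32, 0);  glVertex2i(px + 32, py);
glTexCoord2i(32, 32); glVertex2i(px + 32, py + 32);
glTexCoord2i(0, 32);  glVertex2i(px, py + 32);
glEnd();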

OneEightHundred fucked around with this message at 17:23 on Aug 24, 2010
