|
Jewel posted:Line thickness is one of the most annoying graphics things in terms of how simple it seems it should be sadly and there's a few papers on it, it's called "line stroking". Thanks for the links, I'll look it over.
|
# ? Dec 17, 2014 17:55 |
|
|
# ? Jun 3, 2024 21:47 |
How would you go about changing the resolution of the framebuffer you're drawing to? I'm working on a project with a 500x500 viewport, but I need to encode a 10x10 texture with data that I compute on the GPU. I'm still a huge newbie when it comes to OpenGL, so I really can't figure out how to make that work. As far as I can tell, if I make a 10x10 texture and a 10x10 renderbuffer and draw the elements I want to the texture, it just takes the top-left-most 10x10 pixels of the 500x500 framebuffer, instead of drawing the entire scene in 10x10 texels as I would expect. That is to say, I bind a 10x10 texture to the framebuffer, draw my elements, and get the corner of the scene instead of the whole scene.
Joda fucked around with this message at 14:48 on Dec 20, 2014 |
|
# ? Dec 20, 2014 13:19 |
|
Joda posted:How would you go about changing the resolution of the framebuffer you're drawing to? In my project, I used: code:
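(The actual snippet was lost in archiving, but the fix for this exact symptom is almost always a glViewport call: the viewport does not automatically follow the size of the framebuffer attachment, so the scene keeps rasterizing at 500x500 and the 10x10 attachment only keeps the corner. A hedged sketch, with made-up names for the FBO and draw routine:)

```c
/* Render the whole scene into a 10x10 texture attached to an FBO.
   Assumptions: smallFbo already has a complete 10x10 color attachment,
   and drawScene() issues the normal draw calls. */
glBindFramebuffer(GL_FRAMEBUFFER, smallFbo);
glViewport(0, 0, 10, 10);      /* shrink rasterization to match the attachment */
drawScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 500, 500);    /* restore before the on-screen pass */
```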
|
# ? Dec 20, 2014 19:51 |
NorthByNorthwest posted:In my project, I used: That worked perfectly. Thanks a bunch
|
|
# ? Dec 20, 2014 20:04 |
Does anyone know why the Sublime Text GLSL validation plugin (which uses the ANGLE preprocessor) would say that version 330 is not supported? Is it an issue with my version of OpenGL, or can ANGLE just not be used for non-ES/WebGL code? I swear I try to Google these things, but they don't seem to be very Google-friendly questions.
|
|
# ? Dec 26, 2014 17:54 |
|
ANGLE is specifically an OpenGL ES-to-Direct3D implementation, so its shader validator only understands GLSL ES version strings; a desktop-only #version 330 is outside what it will accept.
|
# ? Dec 26, 2014 18:26 |
|
I'm having trouble porting some code from Linux to Windows (which I know basically nothing about). Specifically, glfwCreateWindow is returning null in this code:code:
code:
My questions include:
* Is there anything immediately wrong with this code that would make it work on Linux but not Windows?
* What information about the Windows system do I need to start debugging this, and where should I look for it?
* I'm actually running Windows under a VM (with VirtualBox); is that just going to be a bad idea here?
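The two code blocks didn't survive the archive. For context, a typical GLFW program that fails this way requests a 3.3 context via window hints before glfwCreateWindow; this sketch shows the usual shape (window size/title are placeholders, and it obviously needs GLFW and a display to run):

```c
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void) {
    if (!glfwInit())
        return 1;

    /* The usual suspects: if the driver (e.g. VirtualBox's GL 1.1
       software stack) can't give out a 3.3 core context,
       glfwCreateWindow returns NULL. Comment these hints out to see
       what context you get by default. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow *win = glfwCreateWindow(640, 480, "test", NULL, NULL);
    if (!win) {
        fprintf(stderr, "glfwCreateWindow returned NULL\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);
    printf("GL version: %s\n", glGetString(GL_VERSION));
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```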
|
# ? Jan 7, 2015 23:23 |
|
fritz posted:* I'm actually running windows under a VM (with virtualbox), is that just going to be a bad idea here? My guess would be that your VM's graphics driver cannot create a 3.3 context. Try commenting out the hints for the GL version and see if it will create a window for you. I believe GLFW will create the window with the highest context version it can get, so check what it gives you after the fact.
|
# ? Jan 8, 2015 02:13 |
|
nye678 posted:My guess would be that your VM's graphics driver cannot create a 3.3 context. That's most likely the case... I was messing around with OpenGL inside Windows VMs on OS X, and I think none of them were able to create a context with version > 2.1 (that was a year or so ago)...
|
# ? Jan 8, 2015 04:42 |
|
fritz posted:virtualbox Now there's your problem
|
# ? Jan 8, 2015 07:10 |
Isn't compiling anything low-level (e.g. C/C++) for cross-platform compatibility a bad idea on a VM? Whatever version of gcc or MSVC you're using in the VM would compile for the system it thinks it's on (i.e. whatever system the VM is emulating, including hardware-level emulation), as opposed to having a native version of Windows or Linux run on dual boot.
|
|
# ? Jan 8, 2015 16:55 |
|
Joda posted:Isn't compiling anything low level (e.g. C/C++) for cross-platform compatibility a bad idea on a VM? On Windows, no. On Linux, only if you use a config script that detects and sets the -march parameter based on whatever processor you have.
|
# ? Jan 8, 2015 17:46 |
|
Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok.
|
# ? Jan 8, 2015 18:56 |
|
fritz posted:Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok. If you try VMWare Fusion or Parallels, at least you will get a context that you can compile GLSL in.
|
# ? Jan 8, 2015 23:03 |
|
fritz posted:Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok. That's probably just the software opengl32.dll that Windows comes with. I haven't looked at VMs in a little while but I've never known them to have serious hardware graphics support.
|
# ? Jan 8, 2015 23:15 |
|
So, Khronos are crowdsourcing the name for the next OpenGL. I'm sure that'll end well.
|
# ? Jan 17, 2015 18:42 |
|
Is it possible to select a texture from within GLSL? For example, could I create a buffer with data for a bunch of objects (like tiles or something) containing vertex data as well as a texture ID, so I can draw the whole buffer in one call and not have to sort anything by what texture it has?
|
# ? Jan 20, 2015 06:15 |
|
Unsure what you mean but maybe you could use a 3D texture and bind multiple textures as "layers" on that (indexed by the "texture ID"). This/your idea might not be a good idea though.
|
# ? Jan 20, 2015 06:20 |
|
Jewel posted:Unsure what you mean but maybe you could use a 3D texture and bind multiple textures as "layers" on that. Sorry, I think I wrote the question too hastily. In OpenGL 4, I want to load a single buffer with vertex data for a number of quads (composed of two triangles each) and draw it in one call. The caveat is that I want each quad to have one texture drawn on it, out of a pool of several textures. Can this be done in GLSL with extra buffer data per triangle, or is the recommended method to keep separate buffers for each texture and bind a texture and draw them one by one? BattleMaster fucked around with this message at 06:33 on Jan 20, 2015 |
# ? Jan 20, 2015 06:29 |
|
You want a texture array, not a 3D texture.
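A sketch of how the texture-array upload fits together (real GL entry points; the 64x64 size, variable names, and layer count are placeholders):

```c
/* Allocate a 64x64 RGBA8 texture array with 'layers' slices. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             64, 64, layers,              /* width, height, layer count */
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Upload one 64x64 tile image into layer i. */
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                0, 0, i,                  /* x offset, y offset, layer */
                64, 64, 1,                /* one full slice */
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```

In the fragment shader, the per-quad texture ID (passed through as a flat attribute) then picks the slice: texture(myArraySampler, vec3(uv, layer)), where myArraySampler is a sampler2DArray.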
|
# ? Jan 20, 2015 06:44 |
|
Whoops, yeah, that's what I meant. Same thing really but no attempt at blending between layers.
|
# ? Jan 20, 2015 06:53 |
|
Thanks guys, I had no idea such a thing existed but it looks perfect, and most hardware seems to support far more textures in an array than I'll have.
|
# ? Jan 20, 2015 10:04 |
|
You can also use bindless textures, but those are a recent-ish feature and aren't really present on older cards.
|
# ? Jan 20, 2015 13:48 |
|
pseudorandom name posted:You want a texture array, not a 3D texture. He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though.
|
# ? Jan 22, 2015 07:11 |
|
Goreld posted:He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though. For some reason I never thought of that even though I'm calculating and feeding texture coordinates to the shader anyway. However since there's a fancypants way of doing it I may as well use it. Also she in spite of my name
|
# ? Jan 22, 2015 09:21 |
|
Goreld posted:He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though. Well, the whole point of a texture array is to not use a texture atlas, so, yes, we'd be sad.
|
# ? Jan 22, 2015 09:23 |
|
pseudorandom name posted:Well, the whole point of a texture array is to not use a texture atlas, so, yes, we'd be sad. Edit: found that it's an OpenGL 3 feature. Unclear whether the GPU in question does OpenGL 3. Is there a table somewhere? roomforthetuna fucked around with this message at 05:18 on Jan 23, 2015 |
# ? Jan 23, 2015 04:04 |
|
It is a Direct3D 10 feature, so any GPU of D3D10 class (roughly the same hardware generation as OpenGL 3) supports it.
|
# ? Jan 23, 2015 04:13 |
|
This seems like a well-timed question based on the last page of discussion. I am having issues with OpenGL texture arrays when the source is a single texture atlas image file. The texture atlas is a 1024x1024 RGBA image file of 16x16 tiles, each 64x64 pixels in size. This is similar to a 64x64 texture atlas file for Minecraft. However, attempting to generate the texture array from the single image file by describing individual tile x,y offsets and width/height has failed, so I'm definitely doing something wrong. I ended up pulling each individual tile out into its own separate file (64x64, RGBA format) with the naming convention "[Index].png". The texture array is then made by iterating over each image file, loading it, and adding it to the texture array based on its index. Example working code (the Bitmap class is just a wrapper for the stb_image library and does nothing weird; the images are loaded and working fine): code:
code:
How do I use the single image file and create the texture array by iterating through the list of tiles with (col,row) indexing? Everything I have tried ends with horrible random garbage. What I was hoping to do was use the one texture atlas file as the source pixels for the call to glTexImage3D(...), then call glTexSubImage3D with the coordinate offset and width/height of each sub-tile to fill in the texture array.
|
# ? Jan 25, 2015 02:40 |
|
fritz posted:Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok. I'm way late to this party, but still useful info: out of the box, Windows only supports OpenGL 1.1 (at least up through Win 7; I haven't tested anything more recent). You'll have to install OpenGL drivers from your graphics card vendor to get anything newer. VMware Fusion and Parallels do this when they install their integration packages, I believe. I got bit by this a bunch at work in the past. quote:Edit: found that it's an OpenGL 3 feature. Unclear whether the GPU in question does OpenGL 3. Is there a table somewhere? Maybe this helps? http://opengl.delphigl.de samiamwork fucked around with this message at 03:59 on Jan 26, 2015 |
# ? Jan 26, 2015 03:53 |
|
I want to apply a repeatable normal map texture to an arbitrary triangle mesh, like say a 3D model of a papercraft and give it a paper-y texture. I've got the model UV unwrapped for things like ambient occlusion baking, but I want the "paper" texture to be nice and uniform with no distortion. Can I use GLSL and texture matrices to do something like that? What math would be involved?
|
# ? Jan 28, 2015 18:02 |
|
HiriseSoftware posted:I want to apply a repeatable normal map texture to an arbitrary triangle mesh, like say a 3D model of a papercraft and give it a paper-y texture. I think the best approach would be to just index your texture using the model's vertex coordinates. You could use the x/y, y/z, and x/z planes to do separate lookups. See example 1-3 here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch01.html
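(That planar-projection approach is usually called triplanar mapping: the three lookups are blended by the normalized absolute components of the surface normal. A sketch in plain C so the math stands alone; the function name is made up:)

```c
#include <math.h>

/* Triplanar blend weights from a surface normal: each weight says how
   much the texture lookup in the plane perpendicular to that axis
   contributes. Assumes a nonzero normal; weights are non-negative
   and sum to 1. */
void triplanar_weights(float nx, float ny, float nz, float w[3])
{
    float ax = fabsf(nx), ay = fabsf(ny), az = fabsf(nz);
    float sum = ax + ay + az;
    w[0] = ax / sum;   /* weight for the y/z-plane lookup */
    w[1] = ay / sum;   /* weight for the x/z-plane lookup */
    w[2] = az / sum;   /* weight for the x/y-plane lookup */
}
```

In the shader the same idea runs per fragment: three texture fetches, one per plane, summed with these weights, which keeps the repeatable texture undistorted regardless of the UV unwrap.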
|
# ? Jan 28, 2015 18:34 |
|
CodeJanitor posted:This seems like a good timed question based on the last page of discussion. glTexSubImage takes a one-dimensional array for the input data; the x,y offsets and width/height parameters refer to the destination texture and have no effect on the source data. This means you need to reorganize your source pixel data so that a tile's rows are sequential in memory, rather than split up as they are when the texture atlas is loaded.

The atlas looks like this in memory:

|-- Tile 1 Row 1 --| |-- Tile 2 Row 1 --| ... |-- Tile N Row 1 --|
|-- Tile 1 Row 2 --| |-- Tile 2 Row 2 --| ... |-- Tile N Row 2 --|
...
|-- Tile 1 Row M --| |-- Tile 2 Row M --| ... |-- Tile N Row M --|

But glTexSubImage wants:

|-- Tile 1 Row 1 --| |-- Tile 1 Row 2 --| ... |-- Tile 1 Row M --|

One possible method for resolving this is to manually copy an individual tile's data into a new buffer sized for the tile and upload that to your texture. code:
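The code block above was lost in archiving, but the copy it describes is plain pointer arithmetic; here is a hedged sketch as a standalone C routine (the function name and the tightly-packed RGBA8 assumption are mine):

```c
#include <string.h>

/* Copy one tile out of a tightly packed RGBA8 atlas into a contiguous
   buffer suitable for glTexSubImage3D. atlas_w is the atlas width in
   pixels; col/row index the tile grid; 4 bytes per pixel. */
void extract_tile(const unsigned char *atlas, int atlas_w,
                  int tile_w, int tile_h, int col, int row,
                  unsigned char *out)
{
    const int bpp = 4;                       /* RGBA8 */
    const int atlas_pitch = atlas_w * bpp;   /* bytes per atlas row */
    const int tile_pitch  = tile_w * bpp;    /* bytes per tile row  */
    /* byte offset of the tile's top-left pixel inside the atlas */
    const unsigned char *src = atlas
        + row * tile_h * atlas_pitch         /* skip whole rows of tiles */
        + col * tile_pitch;                  /* skip tiles to the left   */

    for (int y = 0; y < tile_h; ++y)
        memcpy(out + y * tile_pitch, src + y * atlas_pitch, tile_pitch);
}
```

The extracted buffer is then what you hand to glTexSubImage3D for layer row * tiles_per_row + col.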
|
# ? Jan 28, 2015 22:47 |
|
I was poking through "OpenGL Insights" (ch 14)
|
# ? Jan 29, 2015 02:39 |
|
I have a question about VAOs and transform feedback. The code is more or less straight from here: http://prideout.net/blog/?p=67 I am kinda new to "new" OpenGL, having mostly worked with 2.1, so I haven't really used VAOs much. There are two buffers, ParticleBufferA and ParticleBufferB. I use transform feedback to do the particle location update from BufferA to BufferB, then I swap (the GLuints) so the updated locations in B are referred to in A. From what I have read about VAOs, the general case is you bind the VAO, then bind the buffer and set up the vertexAttrib stuff, and for subsequent use you just need to bind the VAO. My question is: after I do the transform feedback stuff, what's the right way to handle ParticleBufferA and ParticleBufferB being swapped with respect to the VAO? At the moment I have one VAO, which I set up for each particle update (the advect() function below). Is it possible to make one VAO at the start and not update it? Should I instead make two VAOs and switch between them? Or does it not really matter that I am re-doing the VAO every frame? I am using openFrameworks, so advectShader is just a vertex shader that moves the particles based on a fixed velocity, and renderParts is a vertex/frag shader that just does the MVP transform and sets a color. The buffer data in the Particles arrays is interleaved: position (3 floats: x, y, z), birth time (1 float), velocity (3 floats: x, y, z direction). partcnt is the number of particles. code:
I guess ideally I would like to set up my VAO(s) in setup() and just have a call to glBindVertexArray(some_vao_id) in advect(), rather than all the glBindBuffer/glEnableVertexAttribArray/glVertexAttribPointer stuff, but I'm not sure what the right way to do that is if I swap the buffers. Maybe it doesn't even matter. I'm trying to get as many particles as possible, so I would like it to be as performant as possible; any other comments/suggestions are welcome.
|
# ? Jan 29, 2015 11:17 |
|
Seems like the ideal thing would be to set up two VAOs and switch between them. It's certainly a waste of time to create a VAO every time you draw, set it up, and then throw it away. I'm not sure what you're trying to do after you're finished with the VAO in advect(). The array_buffer binding is a property of each vertex attribute array pointer in the VAO and they won't be modified by unbinding it. The attribute enables are also properties of the VAO, they're already gone since you unbound it. Also there's a memory leak because you're not deleting the VAO after. Edit: Oh sorry, you only initialise it once. Spatial fucked around with this message at 12:21 on Jan 29, 2015 |
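(A hedged sketch of that two-VAO arrangement, using the interleaved position/birth-time/velocity layout from the post; the attribute locations and variable names are invented:)

```c
GLuint vao[2], buf[2];   /* buf[i] is the buffer vao[i] reads from */
int src = 0;             /* which side is the current source */

/* One-time setup: each VAO permanently records the attribute pointers
   into its own buffer, so the per-frame rebinding goes away. */
for (int i = 0; i < 2; ++i) {
    glBindVertexArray(vao[i]);
    glBindBuffer(GL_ARRAY_BUFFER, buf[i]);
    GLsizei stride = 7 * sizeof(float);   /* pos(3) + birth(1) + vel(3) */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, stride, (void *)(3 * sizeof(float)));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void *)(4 * sizeof(float)));
}
glBindVertexArray(0);

/* Per frame: read from vao[src], capture into the other buffer, swap. */
glBindVertexArray(vao[src]);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, buf[1 - src]);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, partcnt);
glEndTransformFeedback();
src = 1 - src;
```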
# ? Jan 29, 2015 12:16 |
|
Hey thanks, that makes sense. Yeah, I c&p'd most of that from the tutorial and he doesn't use VAOs, so I wasn't sure what needed to be done with unbinding, as the example does it every frame. For some reason glDrawArrays didn't work in oF unless I used a VAO, which is why I added them in. NFI why that was happening; I'm sure it's something deep in the guts of oF. [edit] Also, using 2 VAOs did not seem to have a material impact on performance vs rebinding everything each time, maybe a 0.5 ms improvement. unixbeard fucked around with this message at 09:13 on Jan 30, 2015 |
# ? Jan 30, 2015 08:36 |
|
nye678 posted:glTexSubImage takes a one dimensional array for the input data, the x,y offsets and width/height parameters refer to the destination texture and does not have any effect on the source data. This means you need to reorganize your source pixel data so that a tile's rows are sequential in memory rather than split up as is the case when the texture atlas is loaded. Sorry, I have been really busy for the last couple of weeks, but thank you for the information on how the function handles the data. Wasn't able to find any good explanation of the details and appreciate the help!
|
# ? Feb 5, 2015 03:43 |
|
Are there any recommendations for books teaching modern OpenGL (or DirectX)?
|
# ? Feb 5, 2015 21:54 |
|
|
|
I recommend the OpenGL SuperBible sixth edition.
|
# ? Feb 6, 2015 00:46 |