|
Glimm posted:
Your problem is 'Shape p = *itr;'. This creates a Shape object on the stack that is local to your while block. Any transformation done on p is lost after each iteration and when the while loop ends. What you probably want is 'Shape& p = *itr;'. This creates a reference to the Shape object that exists wherever that iterator is pointing. Whatever transformation is applied to p will stick around until the container 'itr' comes from is destroyed. Edit: beaten. Although I find that creating a reference can make operations on the object easier to read rather than just manipulating it through the iterator. Whatever works best for you. cliffy fucked around with this message at 22:32 on Apr 23, 2010 |
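A minimal sketch of the difference being described, using a hypothetical Shape with a single translate() method (not the original poster's class):

```cpp
#include <cassert>
#include <vector>

// Hypothetical Shape with one field, just to illustrate the difference;
// not the original poster's class.
struct Shape {
    int x = 0;
    void translate(int dx) { x += dx; }
};

// 'Shape p = *itr' copies: the transform lands on a local temporary
// and is thrown away at the end of each iteration.
void translate_copies(std::vector<Shape>& shapes, int dx) {
    for (std::vector<Shape>::iterator itr = shapes.begin(); itr != shapes.end(); ++itr) {
        Shape p = *itr;   // copy constructed on the stack
        p.translate(dx);  // lost when p goes out of scope
    }
}

// 'Shape& p = *itr' aliases the element in the container, so the
// transform sticks.
void translate_refs(std::vector<Shape>& shapes, int dx) {
    for (std::vector<Shape>::iterator itr = shapes.begin(); itr != shapes.end(); ++itr) {
        Shape& p = *itr;  // refers to the element itself
        p.translate(dx);  // persists after the loop
    }
}
```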
# ? Apr 23, 2010 22:29 |
|
|
|
cliffy posted:Although I find that creating a reference can make operations on the object easier to read rather than just manipulating it through the iterator. Whatever works best for you. Makes sense, and if I'm doing more than one call in my loop in the future I'll do this. Thanks!
|
# ? Apr 23, 2010 22:33 |
|
You could also just give the iterator a meaningful name instead. Edit: vvvvvvvvvvvvvvvv Using a pointer is actually even more absurd than using a reference, since the syntax for pointers and the syntax for iterators are intentionally identical. Nippashish fucked around with this message at 00:23 on Apr 24, 2010 |
# ? Apr 23, 2010 22:34 |
|
Edit: beaten several times so never mind. Re-edit: Actually, one of the things I was saying is probably still worth saying - you could also use a pointer rather than a reference, which I think makes it clearer that you're modifying something somewhere else, whereas a reference is trickier because it's syntactically the same as a copy apart from that one little '&'. But that's purely a matter of taste. code:
roomforthetuna fucked around with this message at 00:18 on Apr 24, 2010 |
# ? Apr 24, 2010 00:13 |
|
The only time you should use references instead of pointers is when something isn't going to change and you COULD copy it, but want to avoid the performance hit of doing so and don't want to poo poo your code up with asterisks. Or when you're overloading an operator that returns an L-value (like indexing). Using references when you'd normally use a pointer though leads only to misery.
|
# ? Apr 24, 2010 00:53 |
|
OneEightHundred posted:Using references when you'd normally use a pointer though leads only to misery. code:
code:
roomforthetuna fucked around with this message at 02:05 on Apr 24, 2010 |
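A minimal sketch of the surprise being discussed: assignment through a reference writes into the object it already refers to rather than re-seating the reference.

```cpp
#include <cassert>

// A reference cannot be re-seated: assigning to it writes through to
// the object it already refers to instead of "moving" the reference.
int reseat_surprise() {
    int a = 1, b = 2;
    int& r = a;  // r aliases a
    r = b;       // does NOT rebind r to b; it copies b's value into a
    return a;    // a is now 2, and r still refers to a
}
```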
# ? Apr 24, 2010 01:08 |
|
roomforthetuna posted:Yeah, this. And not just because it's more confusing, but because of this: This is only dangerous if you don't understand the semantics of the code you're writing. Expecting the reference to move is a problem with your (rhetorical "your") understanding of the language, not actually a problem with references. This is getting off topic though so I'll drop it...
|
# ? Apr 24, 2010 01:30 |
|
Actually assignable references are a design abortion that serves no purpose outside of L-value operator overloads. References were created so you could use a pointer as if it were a value, not so you could stop using -> and * ("this" isn't a reference for a reason).
|
# ? Apr 24, 2010 02:05 |
|
We're supposed to draw a Hermite spline, but I'm terribly confused. This has helped me a lot (http://www.cubic.org/docs/hermite.htm) but still leaves a lot of gaps. 1.) The four parameters are defined by the user (startpoint, endpoint, starting point tangent, ending point tangent). But how does one find the tangent of a set of points? 2.) What is "s"? It says "Vector S: The interpolation-point and it's powers up to 3:" but that doesn't mean anything to me because at no point is a numerical value assigned to it, nor does it explain how it could be. It also defines it as code:
3.) Another thing that's really confusing is the way it deals with points as vectors. code:
This is just so confusing to me and no matter how much time I spend looking up information, I don't learn anything because so many crucial gaps are unfilled. All I want to do is draw a drat Hermite curve. This is something I could probably understand in like 5 minutes if I had a proper and full explanation.
|
# ? Apr 25, 2010 00:50 |
|
You don't "find" the tangent for a set of points, you have to explicitly define them. The tangent is the direction you want the line to be facing at its start and end points, which obviously has nothing to do with the locations of the points. If you want to see this in action, open ANY draw program that lets you define curves, and note that there are two handles attached to every point you create which let you modify the curvature. "Vector S" is just a set of four values created using the interpolation value. The interpolation value is something you can plug in to compute locations on the spline: using 0 gives you the first point, using 1 gives you the second point, and anything in between gives you somewhere along the spline.
|
# ? Apr 25, 2010 01:42 |
|
OneEightHundred posted:You don't "find" the tangent for a set of points, you have to explicitly define them. The tangent is the direction you want the line to be facing at its start and end points, which obviously has nothing to do with the locations of the points. I'm curious what "The interpolation-point and it's powers up to 3:" might mean. My best guess is that it means the point, the tangent and the rate-of-change-of-tangent, but that would be differentiation not powers.
|
# ? Apr 25, 2010 03:48 |
|
Read the article: in the case of 4 coefficients, Hermite interpolation uses the original, squared, and cubed values of the interpolator; that's where that comes from. Tangent direction ON the spline is calculable, yes, but the point is that you can not get the spline AT ALL unless you manually define what the tangents at the start and end points are. If you want to calculate that then unfortunately I'm too tired to think of calculus right now, but I'm pretty sure that since it's just a bunch of polynomials added together, you can get the tangent of an arbitrary point on the line by just adding the derivatives of those polynomials together. Polynomial derivatives are some easy calc 101 poo poo: derivative(a*x^b) = a*b*x^(b-1), where a and b are real numbers. OneEightHundred fucked around with this message at 06:33 on Apr 25, 2010 |
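That power rule is a one-liner; a sketch of it as code, for a single monomial term:

```cpp
#include <cassert>
#include <cmath>

// Power rule for a single monomial term: d/dx (a*x^b) = a*b*x^(b-1).
double monomial_derivative(double a, double b, double x) {
    return a * b * std::pow(x, b - 1.0);
}
```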
# ? Apr 25, 2010 06:28 |
|
OneEightHundred posted:Tangent direction ON the spline is calculable, yes, but the point is that you can not get the spline AT ALL unless you manually define what the tangents at the start and end points are. You don't need tangents; you can also just use more points and trivially calculate the tangents (giving you a cardinal spline). Plorkyeran fucked around with this message at 15:25 on Apr 25, 2010 |
# ? Apr 25, 2010 15:20 |
|
Maybe I misunderstood the question. If the question is "how do you get the tangent of P1 to T1" then the answer is you just subtract (T1 - P1) If the question is "how do you find T1 and T2 given P1 and P2", then the answer is you don't, you have to explicitly define them because they could be anything. Plorkyeran posted:You don't need tangents; you can also just use more points and trivially calculate the tangents (giving you a cardinal spline).
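A sketch of the cardinal-spline idea Plorkyeran mentions: estimate each interior point's tangent from its two neighbors (a tension of 0.5 gives Catmull-Rom). Vec2 is just an illustrative stand-in for whatever point type you're using.

```cpp
#include <cassert>

struct Vec2 { double x, y; };  // illustrative stand-in for a point type

// Tangent at an interior point estimated from its two neighbors;
// tension 0.5 gives a Catmull-Rom spline.
Vec2 cardinal_tangent(const Vec2& prev, const Vec2& next, double tension) {
    return Vec2{ tension * (next.x - prev.x), tension * (next.y - prev.y) };
}
```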
|
# ? Apr 25, 2010 15:47 |
|
OneEightHundred posted:Maybe I misunderstood the question. Yes! This was it, thank you. So if tangents are explicitly defined, then how exactly is the curve drawn? Because I'm still not quite understanding the math. Since S can also be anything from 0 to 1, what if we just want to draw a curve given those 4 points? A set of 4 points is defined, and you've got C. h is already given (as per this site again http://www.cubic.org/docs/hermite.htm). So then just go through that pseudocode loop? I'm still not sure what steps is, which seems crucial to drawing the curve, and also whether vector p is one set of points, 4 sets of points, or just a number. Thanks!
|
# ? Apr 26, 2010 10:50 |
|
Whilst farting I posted:Yes! This was it, thank you. steps is the number of line segments you use to approximate the curve - more steps means a smoother curve but more processing to calculate it. Vector p is just that - a vector. In this case, it's the next point to draw a line to (the pseudocode uses moveto and lineto but this can easily be replaced with GL_LINES or something if you just want a curve on the screen)
|
# ? Apr 26, 2010 11:50 |
|
So I made a simple 3ds loader (mostly by following this tutorial) for my cool car racing game. I load a 3ds model fine and then I load a .bmp texture file that I turn on before drawing the vertices of the model. But most 3ds (or obj) models that I've found don't come with .bmp textures. Are the textures encoded into the 3ds file itself? What's the usual (and simplest) way for loading a model with a nice texture in OpenGL? It doesn't even have to be in 3ds format, obj is fine too.
|
# ? Apr 26, 2010 14:16 |
|
Whilst farting I posted:So if tangents are explicitly defined, then how exactly is the curve drawn? The simple answer is to let s range over [0,1] in however many steps you want to use and then calculate: P(s)=P1*h1(s) + P2*h2(s) + T1*h3(s) + T2*h4(s) for each value of s. Or in pseudocode: code:
The tangent weighting functions h3 and h4 are a little bit more obtuse, but make sense if you look at the special case of a small but non-zero value of s. To a first-order approximation the weighting functions become h1 = 1, h2 = 0, h3 = s, h4 = 0, and the point is calculated approximately as P(s) = P1 + T1*s, which is essentially the first-order Taylor series of your function around s=0. There is an analogous Taylor expansion around s=1 using P2, T2. So basically the whole gimmick here is that you have first-order Taylor series approximations of your function at s=0 and s=1, and you are using the weighting factors h* to interpolate between them. PDP-1 fucked around with this message at 15:27 on Apr 26, 2010 |
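Putting that together, a sketch of the evaluation loop, using the standard cubic Hermite basis polynomials from the cubic.org article (Vec2 is an illustrative stand-in for a point type):

```cpp
#include <cassert>
#include <vector>

struct Vec2 { double x, y; };  // illustrative stand-in for a point type

// One point on the curve: P(s) = P1*h1(s) + P2*h2(s) + T1*h3(s) + T2*h4(s),
// using the standard cubic Hermite basis polynomials.
Vec2 hermite(const Vec2& P1, const Vec2& P2, const Vec2& T1, const Vec2& T2, double s) {
    double s2 = s * s, s3 = s2 * s;
    double h1 =  2*s3 - 3*s2 + 1;   // weight for the start point
    double h2 = -2*s3 + 3*s2;       // weight for the end point
    double h3 =    s3 - 2*s2 + s;   // weight for the start tangent
    double h4 =    s3 -   s2;       // weight for the end tangent
    return Vec2{ h1*P1.x + h2*P2.x + h3*T1.x + h4*T2.x,
                 h1*P1.y + h2*P2.y + h3*T1.y + h4*T2.y };
}

// Let s range over [0,1] in 'steps' segments; draw lines between
// consecutive returned points (moveto/lineto, GL_LINES, whatever).
std::vector<Vec2> sample_hermite(const Vec2& P1, const Vec2& P2,
                                 const Vec2& T1, const Vec2& T2, int steps) {
    std::vector<Vec2> pts;
    for (int i = 0; i <= steps; ++i)
        pts.push_back(hermite(P1, P2, T1, T2, double(i) / steps));
    return pts;
}
```

At s=0 the result is exactly P1 and at s=1 exactly P2, which is a handy sanity check for an implementation.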
# ? Apr 26, 2010 15:19 |
|
Bonus posted:So I made a simple 3ds loader (mostly by following this tutorial) for my cool car racing game. I load a 3ds model fine and then I load a .bmp texture file that I turn on before drawing the vertices of the model. But most 3ds (or obj) models that I've found don't come with .bmp textures. Are the textures encoded into the 3ds file itself? What's the usual (and simplest) way for loading a model with a nice texture in OpenGL? It doesn't even have to be in 3ds format, obj is fine too. There really isn't one. You'll need to roll your own, or find a library. As for the textures, I'd assume they'd be part of a material list of some sort, but I'm not familiar with that file format (your link doesn't really go into it). It may only contain references to actual texture files. As for loading textures that aren't raw BMP, it's very dependent on your OS. OSX for example has ImageIO that can get the raw bits out.
|
# ? Apr 27, 2010 02:55 |
|
Use FreeImage for image loading, it will spare you from a lot of unneeded rage. As for models: Ideally what you want is interleaved vertex data and an index array, which nearly always requires converting it out of what the model format actually contains. Animation is another adventure entirely with a retarded number of ways to do it. Most model formats are tailored for what they'll be used for, though if you want a general-purpose one, consider MD5 (skeletal) or MD3 (vertex animated or static). Consider looking in to Open Asset Import Library if you don't want to write a bunch of boilerplate for your model loader.
|
# ? Apr 27, 2010 05:22 |
|
zzz posted:steps is the number of line segments you use to approximate the curve - more steps means a smoother curve but more processing to calculate it. Ohh, that makes sense. I had no idea about the smoothness and it seems crucial to any tutorial, so thank you for the explanation. PDP-1 posted:The simple answer is to let s range over [0,1] in however many steps you want to use and then calculate: This gives me a ton of needed information and I think I even understand what I'm coding for now, which should help immensely. Thanks so much for the thoroughness! That's really what I needed.
|
# ? Apr 27, 2010 18:22 |
|
Ok, this question is actually about 2d but I don't think there's a 2d thread so here goes. I'm looking to make my own very basic hardware 2d renderer in VHDL. From what I can tell, the first thing I need to support is rasterization. I'd like to ultimately be able to draw triangles/polygons and apply textures to them with rotation and scaling. Does anyone have suggestions for a good source that explains these sorts of operations? I'm actually looking to try to implement the PSX's graphics hardware, and I'm aware this will be a significant task, but it should also be very fun. Thanks!
|
# ? Apr 27, 2010 21:38 |
|
I think the OpenGL and D3D specs both cover how they handle rasterization (or rather, how compliant implementations are supposed to) so those would be good places to look.
|
# ? Apr 27, 2010 21:52 |
|
OneEightHundred posted:I think the OpenGL and D3D specs both cover how they handle rasterization (or rather, how compliant implementations are supposed to) so those would be good places to look. Perfect, thank you. This brings to my attention that the guides on PSX I have list the commands and formats that the hardware uses but don't mention the rules it uses, so I suppose some tweaking and testing will eventually be required to get it accurate.
|
# ? Apr 27, 2010 21:57 |
|
This has a crapload of info on the PSX: http://gshi.org/eh/documents/psx_documentation_project.pdf The thing doesn't do perspective correct texturing, so you don't have to worry about that. How familiar are you with rasterization in general? If not at all, start with Bresenham's line algorithm. Abrash has a ton of interesting stuff if you can find it. Foley and van Dam has a chapter on Raster algorithms too, but that book is way old.
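For reference, a sketch of Bresenham's line algorithm in integer-only form, returning the plotted pixels rather than writing into a framebuffer:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Pixel { int x, y; };

// Integer-only Bresenham line rasterizer; returns the pixels it would
// plot instead of writing into a framebuffer.
std::vector<Pixel> bresenham(int x0, int y0, int x1, int y1) {
    std::vector<Pixel> out;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // error term tracks distance from the ideal line
    for (;;) {
        out.push_back(Pixel{x0, y0});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step along x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step along y
    }
    return out;
}
```

The whole thing is additions, comparisons, and sign flips, which is part of why it maps so well to hardware.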
|
# ? Apr 28, 2010 00:09 |
|
Spite posted:This has a crapload of info on the PSX: I have seen that pdf, but the images were missing in my version, so thank you. Also, thanks for this post, every bit of info I get nudges me closer to my goal. I decided on doing this project next school year with a couple of other people and we'll probably be working on it for the better part of a year before we graduate and go our separate ways. I only just picked it last week so I'm still doing lots of reading. I'm still in the research phase so I haven't got everything figured out just yet. We're actually going to try to get as much of the PSX running on an FPGA as we can. I wasn't sure how much the "GPU" actually handled on the PSX. It seems like at a minimum I have to be able to rasterize a triangle that's been rotated from the texture coordinates. When you say it doesn't do perspective correct texturing, do you mean it doesn't do scaling? If this is in the pdf, then I definitely need to read it closer. Not having to do scaling seems like a huge relief. I know basically nothing about rasterization. I'm a computer engineering major so in general the hardware stuff comes a bit easier to me than software does. I'll be in a course on 3d graphics next semester that will cover rasterization briefly and I've already gone ahead and peeked at the slides for that. I actually did see Bresenham's line algorithm in those slides and I'll make sure I understand it. Dated materials will actually probably work quite well for this project given its age. A big concern I have at the moment is how much floating point I'll have to implement for this. I presume I'll need some because the slopes of the lines aren't going to be integers, even with integer coordinates. I know that the GTE does floating point stuff but that's sort of a different beast.
I'm really excited about this because I want a general idea of how hardware can do graphics and this seems like a great way to get acquainted, even if the PSX is incredibly dated by now. Thanks again for the info!
|
# ? Apr 28, 2010 02:41 |
|
Markov Chain Chomp posted:When you say it doesn't do perspective correct texturing, do you mean it doesn't do scaling? Scaling is a necessity of texturing. As for what perspective correct texturing is: http://en.wikipedia.org/wiki/Texture_mapping#Perspective_correctness Essentially, textures become more dense as they approach more distant vertices; the PlayStation graphics processor is entirely 2D though, so it doesn't do that. This is why you see warping on the roads of every PSX game that involves driving. It doesn't support Z-buffering either; software is forced to depth sort before rendering. quote:A big concern I have at the moment is how much floating point I'll have to implement for this.
|
# ? Apr 28, 2010 04:40 |
|
What would you reckon is the best way to sort render states: shaders, uniforms (textures), VBOs?
|
# ? Apr 28, 2010 13:55 |
|
heeen posted:What would you reckon is the best way to sort render states: Generally speaking, shader changes are more expensive but nearly everything else causes a full re-upload of the state on modern hardware. The real solution is to do things that let you avoid render state changes completely. Merging more things into single draw calls, and overarching stuff like deferred lighting.
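One common scheme for the sorting part of the question: pack state IDs into a single key with the most expensive state in the highest bits, then sort draw calls by it. The field widths and the shader-then-texture-then-VBO ordering here are illustrative assumptions, not something prescribed in this thread:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Pack state IDs into one sort key, most expensive state in the highest
// bits, and sort draw calls by it so the costliest switches happen least
// often. Field widths and ordering here are illustrative assumptions.
struct DrawCall {
    uint16_t shader, texture, vbo;
    uint64_t key() const {
        return (uint64_t(shader) << 32) | (uint64_t(texture) << 16) | vbo;
    }
};

void sort_draws(std::vector<DrawCall>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.key() < b.key(); });
}
```

After sorting, you walk the list and only issue a state change when the relevant field differs from the previous draw.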
|
# ? Apr 28, 2010 19:02 |
|
OneEightHundred posted:Generally speaking, shader changes are more expensive but nearly everything else causes a full re-upload of the state on modern hardware. Can you cite anything for those claims? I'd love to read about it more in depth. I thought changing the source of the vertex data wouldn't require a pipeline flush since vertices must be passed by value anyways, whereas shaders and uniform changes require the pipeline to be empty before you can change anything.
|
# ? Apr 28, 2010 19:21 |
|
heeen posted:Can you cite anything for those claims? I'd love to read about it more in depth. It depends on the hardware and driver. Typically, changing the active shader is the most expensive (well, unless you are uploading a large constant buffer or something). The costs are way more apparent on the CPU side than the hardware side in most cases though, because of the validation etc. that the runtime has to do (OpenGL is worse than DX in this regard because of all its fixed-function legacy stuff). Many games these days are still CPU bound (especially on the consoles) because stuff isn't batched well or merged well.
|
# ? Apr 28, 2010 19:50 |
|
OneEightHundred posted:Scaling is a necessity of texturing. Thanks again. I'm starting to get a real feel for this. I have a pretty good idea now of how to draw a line and how to determine if a point falls inside a triangle using edge equations. Now I'm trying to do some linear algebra to get the hang of textures and going from (x,y) to (u,v). I had a moment where everything clicked and the sanity checks for doing this quickly and with integer math are starting to work out.
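A sketch of that edge-equation test, all in integer math (it assumes counter-clockwise winding, and points exactly on an edge count as inside):

```cpp
#include <cassert>

// Signed area test: positive when (px,py) is to the left of the edge a->b.
long long edge(int ax, int ay, int bx, int by, int px, int py) {
    return (long long)(bx - ax) * (py - ay) - (long long)(by - ay) * (px - ax);
}

// A point is inside the triangle when it is on the same side of all
// three edges. Assumes counter-clockwise winding; points exactly on an
// edge count as inside.
bool inside_triangle(int ax, int ay, int bx, int by, int cx, int cy, int px, int py) {
    return edge(ax, ay, bx, by, px, py) >= 0 &&
           edge(bx, by, cx, cy, px, py) >= 0 &&
           edge(cx, cy, ax, ay, px, py) >= 0;
}
```

The same edge values, normalized, also give you the barycentric weights you need for interpolating texture coordinates across the triangle.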
|
# ? Apr 28, 2010 20:02 |
|
Spite posted:OpenGL is worse than DX in this regard because of all its fixed-function legacy stuff (This is really the one area where OpenGL has a performance edge) quote:I thought changing the source of the vertex data wouldn't require a pipeline flush Each draw call in D3D9 essentially IS a pipeline flush though. e: http://developer.nvidia.com/docs/IO/9078/Modern-Graphics-Engine-Design.pdf Kind of old, but covers the basic jist: quote:Question : What is the most expensive render state change? OneEightHundred fucked around with this message at 01:01 on Apr 29, 2010 |
# ? Apr 28, 2010 20:27 |
|
OneEightHundred posted:Generally speaking, shader changes are more expensive but nearly everything else causes a full re-upload of the state on modern hardware. Actually, very few things cause stalls/full state re-uploads nowadays. State management used to be a bigger issue on older hardware, but nowadays there are enough transistors hanging around that the hardware is able to version state changes and "bubble" them through the pipeline with the rendering workload. Even changing render targets doesn't matter as much anymore, unless you're trying to re-bind the texture for reading somewhere else in the pipeline (in which case, the hardware will pick up that it's trying to be used in two different ways, and stall until the changes are committed). However, state changes aren't necessarily free. Driver overhead is usually a big culprit -- the DirectX stack is pretty deep, and there are a lot of opportunities for cache misses to occur when you're binding resource views in various parts of memory that need to be made resident, have their ref-counts checked, etc. This turns a semantically simple operation like pushing some textures into the command buffer into hundreds of idle cycles. This is actually the motivation behind the "Bindless Graphics" extensions in OpenGL. Instead of having active state, you just put your texture headers/transform matrices/whatever into a big buffer in memory, give the base pointers to the GPU, and have the shaders fetch what they need from where they need it on the fly. Another thing to concern yourself with is synchronization points. Any time you create resources (and sometimes even when you release them) the CPU and GPU have to sync, idling whichever one was ahead. If you're not careful with how you're mapping your constant buffers (for example) or something like dynamic vertex data, you can end up injecting a lot of unnecessary overhead.
Obviously this will vary a little with the under-the-hood hardware and driver implementations of various vendors (ATI seemed to indicate at GDC that changing the Tessellation state in DX11 might be slightly expensive for them, while NVIDIA doesn't seem to have any penalty for example) but this should all be accurate "to first order". Of course, if you're making games that target hardware pre-2008ish, they might not have as robust state versioning functionality, so this will still matter. And if you're using D3D9 (OneEightHundred is definitely right about the improvements in D3D10/D3D11) you definitely still need to be careful. However, in modern APIs/hardware, it's becoming less of a concern. Pay attention if you're mixing Compute (DirectCompute/OpenCL/CUDA) and Graphics (Direct3D or OpenGL), though. Hubis fucked around with this message at 03:47 on Apr 29, 2010 |
# ? Apr 29, 2010 03:44 |
|
And whilst all this is 'good practice', if you are writing a little engine to play around with at home it's mostly overkill, and it's probably better spending time writing features than optimising the poo poo out of context changes / synchronisation. Not having a go, this is really important stuff. Just trying to say that if you are just learning to program graphics there are better ways to spend your time.
|
# ? Apr 30, 2010 01:40 |
|
The page on Bindless Graphics says it's supported on G80 (8800?) and higher. Would you expect it to be supported on Ion, which is based on the 9400?
|
# ? May 2, 2010 04:14 |
|
Yeah, I *believe* so. The hardware numbers all get messy around G80. Definitely on Ion2.
|
# ? May 2, 2010 17:50 |
|
I'm a beginner and am writing a simple engine to display simple geometric shapes - no textures needed. I want to draw points as X's on the screen - am I right in thinking I now need to look into point sprites and textures for this, or is there a simpler way to achieve this?
|
# ? May 4, 2010 06:29 |
|
adante posted:I'm a beginner and am writing a simple engine to display simple geometric shapes - no textures needed. I want to draw points as X's on the screen - am I right in thinking I now need to look into point sprites and textures for this, or is there a simply way to achieve this? Just draw two lines for every point: one from (-1,-1) to (1,1) and one from (-1,1) to (1,-1). Scale as needed.
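A sketch of that: compute the two segments' endpoints from the point's center and a half-size, then hand them to whatever line call you're using (GL_LINES or otherwise):

```cpp
#include <cassert>

struct Segment { float x0, y0, x1, y1; };

// The two strokes of an X marker centered at (cx, cy) with half-size s,
// ready to feed to GL_LINES or any 2D line call.
void x_marker(float cx, float cy, float s, Segment out[2]) {
    out[0] = Segment{cx - s, cy - s, cx + s, cy + s};  // bottom-left to top-right
    out[1] = Segment{cx - s, cy + s, cx + s, cy - s};  // top-left to bottom-right
}
```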
|
# ? May 4, 2010 09:29 |
|
|
|
I'm trying to work out how to draw terrain as a crossover pattern of triangles, like so: code:
I'm also wondering if there's a better method: I worked out that I'd have 6 extra triangles for every four tiles, but my vertices would go from 24 to 16. (Plus a bit more per row, but that's probably set.) Are the extra degenerate triangles generally going to be less costly than the extra vertices from having the triangles be specified as TRIANGLES rather than TRIANGLE_STRIP? Let's assume the terrain mesh is fairly beefy, somewhere on the order of a couple thousand height points square.
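A sketch of the stitched-strip index generation being described, with repeated (degenerate) indices joining the rows; vertex (row, col) is assumed to live at index row*w + col:

```cpp
#include <cassert>
#include <vector>

// Indices for a w-by-h grid of height vertices as one triangle strip,
// with rows stitched together by repeated (degenerate) indices.
// Vertex (row, col) is assumed to live at index row*w + col.
std::vector<unsigned> grid_strip_indices(unsigned w, unsigned h) {
    std::vector<unsigned> idx;
    for (unsigned row = 0; row + 1 < h; ++row) {
        if (row > 0)
            idx.push_back(row * w);                  // repeat first index of the row
        for (unsigned col = 0; col < w; ++col) {
            idx.push_back(row * w + col);            // vertex on this row
            idx.push_back((row + 1) * w + col);      // vertex on the next row
        }
        if (row + 2 < h)
            idx.push_back((row + 1) * w + (w - 1));  // repeat last index of the row
    }
    return idx;
}
```

The repeated indices produce zero-area triangles, which the rasterizer rejects almost for free, so on most hardware the saved vertex transforms win once the mesh gets large.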
|
# ? May 4, 2010 22:50 |