|
Suspicious Dish posted:Compare with: C++ code:
|
# ? May 8, 2013 18:45 |
|
|
|
Orzo posted:I don't follow. What would be an example of something that doesn't use 'x/y' or 'x/y/z'? Anything with splines or tensor surfaces could use parameterizations (ie. a 't' variable).
|
# ? May 8, 2013 18:50 |
|
Sorry, I don't claim to know C++ conventions. All I know is that in C it would be:C code:
|
# ? May 8, 2013 18:51 |
|
OneEightHundred posted:Don't use transform-in-place in C++. What's your reasoning?
|
# ? May 8, 2013 19:17 |
|
Splat posted:What's your reasoning? The only drawback is that it can poo poo up things if value creation/destruction is expensive (i.e. things that are dynamically allocated), but C++11 solves that problem, and you shouldn't be running into that with fixed-size linear algebra types anyway. OneEightHundred fucked around with this message at 20:04 on May 8, 2013 |
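To make the value-semantics side of this concrete, here's a minimal sketch (Vec3 is a made-up type for illustration, not any real library's). For a fixed-size POD type like this, returning by value costs nothing thanks to copy elision, and C++11 move semantics cover the genuinely expensive cases:

```cpp
#include <cassert>

// Hypothetical minimal type -- not from any real library.
struct Vec3 { float x, y, z; };

// Value-style transform: takes and returns by value instead of mutating
// in place. With NRVO the result is constructed directly in the caller's
// storage, and a fixed-size POD like this is trivially cheap to copy anyway.
inline Vec3 translated(Vec3 v, const Vec3& offset) {
    v.x += offset.x; v.y += offset.y; v.z += offset.z;
    return v;
}
```

The caller's original value is left untouched, which is the point of the style: no hidden mutation.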
# ? May 8, 2013 20:00 |
|
Suspicious Dish posted:This is the reasoning that bugs me. I'm certainly not going to use this library, but it's possible that a colleague does, or a former colleague does. My programming philosophies are aimed at designing APIs and making code that's simple to understand, even if it means it's a bit verbose. And if something goes wrong, I have to step through template madness in my breakpoint debugger, cursing at whoever did this. As for stepping through the code, it's actually not as difficult as you are imagining. In the most common case, where you are dealing with vectors, points, and matrices from the same underlying library, with the same unit type and the same named dimensions and the same order of those dimensions, the code you see when stepping through is just the underlying library's matrix and vector operations, with only a tiny bit of indirection. Interoperation is supported, and that is obviously hairier to step through, as you'd imagine, but that is only necessary when you are doing things like converting between different types.
Suspicious Dish posted:Compare with: Suspicious Dish posted:It's probably also faster, given that the compiler might be able to inline the calls to simple math methods rather than a generic system. Anyway, all of that is unimportant in practice because that's not likely to be a bottleneck, but the fact that you get it for free is a welcomed side-effect of the API's design. Suspicious Dish posted:Note that this isn't a *new* API. I've seen it done a thousand times over, since 1996. There's a reason it's *the* API for this. Suspicious Dish posted:"left" is a relative thing. What space is it relative to? Who decides that? The user of the library? Where? Suspicious Dish posted:If, for instance, your model format has a different understanding of X/Y/Z than your engine, you can fix that once, on model load. Making something where everything is generic at runtime is a recipe for disaster. Will you generate your vertex shaders with different coordinate systems, too? Will you correct before you do the upload? That Turkey Story fucked around with this message at 20:44 on May 8, 2013 |
# ? May 8, 2013 20:42 |
|
That Turkey Story posted:I agree that that's a real problem at this point in time with the state of C++. I basically pursue this library, and really any other generic library, with the underlying assumption that C++ will eventually have direct support for concepts in the language, in which case adapting the library to use those features will instantly mean better error messages. I'm a pragmatist. I have to work with MSVC and SunPro. Yes, it sucks, but I'm striving to make life easier for everyone I work with. If I can't do that realistically, right now, then I can't do it. That Turkey Story posted:These aren't assumptions; the function works with any compatible coordinate space. It doesn't have to know if the system is left- or right-handed and it doesn't need to know the relative order of the components. It works in entirely generic code. The order of the parameters should be intuitive to anyone familiar with linear algebra since it's the same order as the abstracted-away matrix and vector operations. You go left-to-right multiplying. If I have a position vector, what does scaling it twice by height even mean? A position doesn't even have a width or height. You can say "I scale it along the scene-space horizontal axis", but I would call it a stretch to call that width. I'm going to assume that scaling the width means blindly multiplying the X field of the point against the scalar, ignoring rotation, because that's what I would expect a naive point class to do if I called the "scale" function. Who decides if "height" modifies the Y or the Z vector? Both these systems are in common use, and I don't see anything in your code that decides this. You can punt and say "well, it can modify the Y or Z axis depending on something at compile time / run time", but that just makes my breakpoint debugging harder. At debug time, I see "x", "y", or "z" fields, and I have no idea what scaling the "height" does in the spur of the moment. 
I notice you also left the question of the order of the transformations unresolved. I still don't know what the ordering is, because I haven't read the spoiler text. That Turkey Story posted:I don't see that at all. How is transformed_point.scale(2.0, 4.0, 1); clearer than scale( width = 2.0, height = 4.0 )? I don't see how anyone reading the first bit of code would have fewer questions than the second. What dimension does 2 affect, what does 4 affect, etc. It's X, Y, Z. Nobody writes anything different. The comment was for mapping your weirdo interface to mine, because I have no idea what width or height are. That Turkey Story posted:The opposite is actually true. In my library you obviously can do the equivalent of what you have shown, spread across separate statements, but I prefer the form presented in my example because it's often more efficient as well as more concise, and it minimizes top-level mutation (in fact, there is none). The reason is that underneath the hood, the different transformations can be combined, simplified, and take advantage of expression templates if the underlying library uses them (e.g. Eigen), etc. The Crytek engine had the same concept of finding inverse transforms and cancelling them out automatically, until they found that simply doing the math would be faster than trying to detect which transforms would cancel, etc. It didn't work out in practice.
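For the curious, the named-argument call syntax can be approximated in plain C++ with distinct wrapper types. This is just one hypothetical way to get order-independent, self-documenting parameters, not the actual library's implementation:

```cpp
#include <cassert>

// Illustrative wrapper types so the call site reads
// scale(Width{2.0}, Height{4.0}) -- names are made up for this sketch.
struct Width  { double v; };
struct Height { double v; };

struct Scale2D { double sx = 1.0, sy = 1.0; };

// Overloads for both argument orders: the names carry the meaning,
// so the position of each argument no longer matters.
inline Scale2D scale(Width w, Height h) { return Scale2D{w.v, h.v}; }
inline Scale2D scale(Height h, Width w) { return Scale2D{w.v, h.v}; }
```

Passing a bare double to either parameter fails to compile, which is the safety argument for this style.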
|
# ? May 8, 2013 21:01 |
|
Suspicious Dish posted:If I have a position vector, what does scaling it twice by height even mean? A position doesn't even have a width or height. You can say "I scale it along the scene-space horizontal axis", but I would call it a stretch to call that width. I'm going to assume that scaling the width means blinding multiplying the X field of the point against the scalar, ignoring rotation, because that's what I would expect a naive point class to do if I called the "scale" function. Suspicious Dish posted:Who decides if "height" modifies the Y or the Z vector? Both these systems are in common use, and I don't see anything in your code that decides this. Suspicious Dish posted:You can punt and say "well, it can modify the Y or Z axis depending on something at compile time / run time", but that just makes my breakpoint debugging harder. At debug time, I see "x", "y", or "z" fields, and I have no idea what scaling the "height" does in the spur of the moment. Suspicious Dish posted:I notice you also left the question of the order of the transformations unresolved. I still don't know what the ordering is, because I haven't read the spoiler text. Suspicious Dish posted:The CryTek engine had the same concept of finding inverse transforms and cancelling them out automatically, until they found that simply doing the math would be faster than trying to detect which transforms would cancel, etc. It didn't work out in practice.
|
# ? May 8, 2013 21:42 |
|
That Turkey Story posted:Okay, this is the kind of feedback I was hoping for, and looking back I guess my comment in the code was poor. What the code is effectively doing is transforming the reference frame. So you're effectively building a transformation matrix containing those transform operations. Why not have this as two steps: build a matrix from these transforms and then multiply it with the point? That's a lot clearer about what this will eventually do. That Turkey Story posted:That part is left to the person using the library. In your project, you know that your vector's "y" is up or "z" is up. You tell the library this via a tiny bit of compile-time metadata at one spot in code. What's the benefit in keeping it flexible if it's a global piece of metadata? I was imagining that it would be per-point. That Turkey Story posted:Read the spoiler text. I think ultimately I need to come up with a better name than "transform" for the operation. I opted not to read the spoiler text intentionally, because the point of the exercise was to test my impression of the API without knowing anything. You can certainly keep a method like perform_transform that takes one of these transform operations, but I'd keep each transform as a separate method call. That makes the ordering of the operations clear. Putting the source at the beginning is another choice, as that's fairly clear as well. Having the source at the end causes doubt in my mind. That Turkey Story posted:If you don't know whether or not y corresponds to "height" in the spur of the moment then you are sharing in my pain. If I was writing a game engine, I'd know in the spur of the moment because I'd pick one convention and stick with it for the entire engine. X/Y/Z coordinates are not "internals" of a point, they're the very things that a point contains.
|
# ? May 8, 2013 22:03 |
|
Suspicious Dish posted:So you're effectively building a transformation matrix containing those transform operations. Why not have this as two steps: build a matrix from these transforms and then multiply it with the point? That's a lot clearer about what this will eventually do. auto translation_matrix = to_matrix( translation( right = 5, forward = 6 ) ); and then perform matrix multiplications, although it might not be as efficient because again, underneath the hood, I only fall back to that kind of an implementation for unspecialized transformation types. Efficiency aside, I just want to make things generic and also eliminate thinking about the matrix side of things for the common types of transformations that people do. In my opinion, it's sort of an embarrassment of the field that in order to make progress with non-trivial game development (or most things graphical) you need to know, at least to some extent, what a homogeneous coordinate system is, what a quaternion is, how to properly accumulate transformations, etc. Even when you are using a premade engine, a lot of that stuff tends to rise to the surface. Think about how long games have been around and how low level supposedly "high level" libraries still are in many ways. Ultimately, you can describe verbally the types of operations that you are doing when navigating a scene graph to pretty much anybody who is not a mathematician or programmer and without using words like "matrix," and you're not really leaving out anything other than implementation details. There's no reason why a library can't be that abstract and it's not even really that much of a stretch. Eigen tries to do this to some extent in certain places with its transformation types, such as Translation, but it's still overall a library for the mathematically minded. I'm just trying to push things a bit further toward "just do the transformations I tell you efficiently without telling me how you did it." 
I agree that the exact semantics associated with high-level operations are still confusing in some ways, but I think that's more of a naming issue than anything else. Suspicious Dish posted:What's the benefit in keeping it flexible if it's a global piece of metadata? I was imagining that it would be per-point. Suspicious Dish posted:I opted not to read the spoiler text intentionally, because the point of the exercise was to test my impression of the API without knowing anything. You can certainly keep a method like perform_transform that takes one of these transform operations, but I'd keep each transform as a separate method call. That makes the ordering of the operations clear. Putting the source at the beginning is another choice, as that's fairly clear as well. Having the source at the end causes doubt in my mind. Suspicious Dish posted:If I was writing a game engine, I'd know in the spur of the moment because I'd pick one convention and stick with it for the entire engine. X/Y/Z coordinates are not "internals" of a point, they're the very things that a point contains.
|
# ? May 9, 2013 00:00 |
|
That Turkey Story posted:I think I'd much rather see code like "acceleration[forward] = 5.0 * meters / sec" than "acceleration[2] = -5.0". Well, you could (and many people do) use enums to replace those indices.
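A minimal version of the enum trick, with axis names picked purely for illustration:

```cpp
#include <cassert>
#include <array>

// Named axis indices instead of magic numbers. The convention is fixed
// in exactly one place, so acceleration[Forward] documents itself.
enum Axis { Right = 0, Up = 1, Forward = 2 };

using Vec3 = std::array<double, 3>;
```

The index still compiles down to a plain integer, so there's no runtime cost over `acceleration[2]`.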
|
# ? May 9, 2013 00:11 |
|
That Turkey Story posted:My only issue with putting the object being transformed first is that that's not the order you'd expect from matrix multiplications, so if you are more familiar with this operation, you'd expect the object to be last. While I want things accessible to people who don't do much linear algebra, I also want things to be intuitive for people who do. My gut says that if I just pick a better name that conveys the importance of order and that the transformations are accumulated, it would make things clearer for both parties, but perhaps not. Well, what you put first also depends on if your matrices are row or column major, so it's hard to say what people would expect to be where.
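The row-vector vs. column-vector split in concrete terms: the two conventions put the point on opposite sides and use transposed matrices, but compute the same result. A 2D sketch with made-up minimal types:

```cpp
#include <cassert>
#include <array>

using Mat2 = std::array<std::array<double, 2>, 2>;
using Vec2 = std::array<double, 2>;

// Row-vector convention: the point goes on the LEFT (v' = v * M).
Vec2 mulRow(const Vec2& v, const Mat2& m) {
    return { v[0] * m[0][0] + v[1] * m[1][0],
             v[0] * m[0][1] + v[1] * m[1][1] };
}

// Column-vector convention: the point goes on the RIGHT (v' = M * v).
Vec2 mulCol(const Mat2& m, const Vec2& v) {
    return { m[0][0] * v[0] + m[0][1] * v[1],
             m[1][0] * v[0] + m[1][1] * v[1] };
}
```

Transposing the matrix converts one convention into the other, which is why neither side's intuition about "what goes first" is wrong.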
|
# ? May 9, 2013 00:45 |
|
Splat posted:Well, what you put first also depends on if your matrices are row or column major, so it's hard to say what people would expect to be where. The code has the same result regardless of storage order. Accesses of matrices are done internally through named tensor indices.
|
# ? May 9, 2013 01:36 |
|
I've been tooling around with HaxePunk a bit and it looks extremely snazzy. I've always liked AS3 + Flixel/Flashpunk a lot from a programming standpoint; it's fun to write code with, but the Flash VM itself is really assy for much beyond simple-ish "web games". Haxe is syntactically nearly identical to AS3, with the big advantage that it can cross-compile to a bunch of targets (including Flash/AIR, C++ code for Win/Mac/Linux/iOS, and its own cross platform VM "Neko"). It also has a command line tool, haxelib, which provides a shared repository for libraries (a la Ruby gems), including a port of Box2D as well as a good amount of other stuff. HaxePunk, as you'd expect, is a Haxe port of Flashpunk (built on top of the NME framework, which adds a bunch of stuff to the vanilla Haxe API). I haven't gotten much further with it than moving a sprite around the screen, but it's a really easy transition from AS3/Flashpunk, and there are some nice conveniences for project/build management. From the command line, I can just type "haxelib run HaxePunk new fooProject", and it'll create a skeleton project with subdirectories for src/assets/bin, a .nmml project file, etc. Building/running the project is as simple as "nme test fooProject.nmml flash". Other stuff that I really like:
- asset management is a lot easier than Flashpunk; just define your asset directories in the .nmml and then you can write "graphic = new Image("gfx/block.png")" without having to manually embed the assets
- support for gamepads and multitouch (for compile targets where that makes sense)
- performance of Haxe/NME code compiled to SWF/AIR is supposed to compare very favorably to the Adobe mxmlc compiler
I first got seriously curious about this stuff when I read about how Defender's Quest (which is a great tower defense/RPG hybrid) was developed using NME, and having a port of the Flashpunk API is just awesome. 
If, like me, you're interested in developing 2D stuff where bleeding edge performance isn't a requirement, this is definitely worth a look. It looks like there is also a Haxe Flixel port (HaxeFlixel) if Flixel is your thing. h_double fucked around with this message at 04:18 on May 9, 2013 |
# ? May 9, 2013 01:41 |
|
That Turkey Story posted:The code has the same result regardless of storage order. Accesses of matrices are done internally through named tensor indices. I didn't say it would have different results. I'm saying I work in row major stuff all day and you normally end up putting your point on the left, not the right. It was in response to: That Turkey Story posted:My only issue with putting the object being transformed first is that that's not the order you'd expect from matrix multiplications, so if you are more familiar with this operation, you'd expect the object to be last. For me it is.
|
# ? May 9, 2013 01:50 |
|
That Turkey Story posted:Efficiency aside, I just want to make things generic and also eliminate thinking about the matrix side of things for the common types of transformations that people do. In my opinion, it's sort of an embarrassment of the field that in order to make progress with non-trivial game development (or most things graphical) you need to know, at least to some extent, what a homogeneous coordinate system is, what a quaternion is, how to properly accumulate transformations, etc. That's because matrices are the most flexible model for doing complex transformations. You can build simple tools on top of matrices, like "turn left", "turn right", but you can't really expect people to grok 3D graphics without understanding the idea of relative and absolute spaces. That still doesn't answer the question "relative to what". What if I want to have a billboard sprite that always faces the camera? (Still common even today in games, but more cleverly hidden). If your simple API only allows me to specify transforms in the absolute space, I need to step down to the matrix level. Most of this flexibility *is* needed to build the varying range of effects common in today's games. Once you get into complex shading, you'll need to know a lot about how camera space and the absolute space interact, etc. It's an abstraction where you can provide a toy learning API, but not an abstraction that you can skip entirely. Soya3D attempted the same thing and then gave up and added matrices at the API level because it's a needed thing.
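A sketch of why the billboard case forces you down to the matrix level: the sprite's rotation has to be the inverse of the camera's rotation so the two cancel and the quad ends up screen-aligned, and for a pure rotation matrix the inverse is just the transpose. The types here are minimal stand-ins, not any engine's API:

```cpp
#include <cassert>
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// For an orthonormal rotation matrix, inverse == transpose, so a
// camera-facing billboard can reuse the camera's rotation directly.
Mat3 billboardRotation(const Mat3& cameraRotation) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i][j] = cameraRotation[j][i];
    return r;
}
```

An API that only speaks "move left / turn right" in world space has no vocabulary for this, which is the objection being made.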
|
# ? May 9, 2013 02:21 |
|
h_double posted:I've been tooling around with HaxePunk a bit and it looks extremely snazzy. I checked out both of these because of your post and HaxeFlixel seems a lot less intuitive than HaxePunk. I loved both libraries in AS3 and eventually settled with Flixel, but HaxePunk is literally so drat easy to use. Thanks so much for the reference. Just having a bit of trouble trying to get a particle emitter going; the lack of tutorials isn't very helpful, unfortunately. edit: I worked it out! HaxePunk is really great. Megadrive fucked around with this message at 12:01 on May 9, 2013 |
# ? May 9, 2013 09:17 |
|
Splat posted:I didn't say it would have different results. I'm saying I work in row major stuff all day and you normally end up putting your point on the left, not the right. Suspicious Dish posted:You can build simple tools on top of matrices, like "turn left", "turn right", but you can't really expect people to grok 3D graphics without understanding the idea of relative and absolute spaces It's also not like there's no such thing as relative and absolute spaces anymore when you use the library. Rather, that's precisely what is there, it's just that the concept is intentionally divorced from the representation as matrices. Transformation between spaces isn't intrinsically linked to matrices, and ultimately matrices can only even represent linear mappings between spaces. We just use matrices because they are able to represent the most common kinds of transformations that are used in graphics and games in a single type that is able to be worked with efficiently (though even then, not all types of transformations used in modern games can be done simply with matrix multiplication). Suspicious Dish posted:That still doesn't answer the question "relative to what". What if I want to have a billboard sprite that always faces the camera? (Still common even today in games, but more cleverly hidden). If your simple API only allows me to specify transforms in the absolute space, I need to step down to the matrix level. Suspicious Dish posted:It's an abstraction where you can provide a toy learning API, but not an abstraction that you can skip entirely. Soya3D attempted the same thing and then gave up and added matrices at the API level because it's a needed thing.
|
# ? May 9, 2013 17:32 |
|
I partially agree with your "memory management is hard, so don't bother with abstractions" statement. Before "the object model" that every system uses today, there were other abstractions over memory that didn't quite work out in practice. It was a tough problem, but not an unsolvable one, and we pulled through successfully. "Visual programming" is a similar problem that hasn't seen a good solution. The existing "visual programming" solutions aren't very good, but I'll hold out faith that it's a research topic, and a newer system might prove the concept. I'm simply expressing skepticism at your claim to abstract away fundamentally complex topics into a general-purpose usable API. I want to stress not that it can't be done, but that your abstraction may not be as good as you claim, considering I can see obvious parallels to Soya3D and other game engines, and their experience shows it didn't quite work out. I think I'm getting farther away from the C++ metaprogramming I originally complained about, on which I think we're going to have to agree to disagree.
|
# ? May 9, 2013 17:56 |
|
That Turkey Story posted:Okay, I see. I thought you just meant that you were changing storage order, not access. In other words you use row-vectors as opposed to column-vectors, work with the transpose of what I would call a transformation matrix, and order everything the other way around? Is this common? Is this at a company or just an independent project? I think I can understand expecting the reverse order of parameters in that case, though the library still needs to pick one convention since that's all thankfully abstracted away. Whichever order I use, I want that order to be recognizable from the name. You shouldn't have to care about backend details like that at a high level. It's company, not personal. No idea how common.
|
# ? May 9, 2013 19:05 |
|
I have recently started a new project: we are building an RPG in the Final Fantasy style with me as dev/design and a friend as the writer. In previous projects I have written my own engines but I always end up spending more time playing with my code than doing the hard work of game design. At the moment I just want something that provides all the boring bits for me and lets me focus on being a designer. I am giving RPG Maker VX Ace a try. I was very pleased to see that RPG Maker uses Ruby for all its scripting but I am not at all pleased with how it stores the files or how bad the interface is (seriously, naming your tabs 1, 2 and 3?!). That said, it seems to provide everything I need for my current project, which is better than my homebrew code base. Most of the interface and game code will end up being modified during this process and I want to keep track of my changes. I am using Git to keep my files in order but since they are serialized Ruby objects I don't get meaningful diffs. I have found a script that will import/export all of the scripts in the game to a folder (http://forums.rpgmakerweb.com/index.php?/topic/9430-script-importexport/) which is quite useful but only covers the actual Ruby scripts in the game. I would like to keep plain text copies of all my configuration files; the ability to modify them and use the changes would be cool but that is less important than keeping track of changes. Has anyone had experience doing this? Depending on how well Ruby can serialize to text this will range from easy to super lame. tl;dr: Does anyone here use RPG Maker and if so how do you manage your files and changes?
|
# ? May 10, 2013 03:33 |
|
aerique posted:It's been a while but try some float values instead of the integers you're using now, so instead of 15 try for example 15.35. Or just stay between 0 and 1. This was a page back, but I just wanted to give a shout and huge thanks for helping me fix my stupid! code:
code:
Yes, I read the discussion on procedurally generated worlds, I'm just trying to learn how to generate random worlds to test different scenarios Winkle-Daddy fucked around with this message at 17:33 on May 10, 2013 |
# ? May 10, 2013 17:24 |
|
Shader question. I am fairly inexperienced with using shaders, but I seem to be getting the hang of it. Last night I realized I wanted a 'hue shift' shader for various effects (like an enemy flashing when they're hit). I implemented this by adding a new float to my vertex data and then passing it in using some semantic that I wasn't using (BLENDWEIGHT, I think, which is a single float). The shader then uses that value (between 0 and 1) to implement a hue shift (e.g., a hue shift of .5 would make something red into something green). So, uh, it worked pretty well the first try. Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader?
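For reference, the hue-shift math itself, written as a plain CPU function (a pixel shader would do the same per-fragment arithmetic). One nit: with hue normalized to [0,1) and red at 0, green sits at 1/3, so in the standard HSV wheel it's a shift of 1/3, not 0.5, that maps pure red to pure green — your shader may parameterize it differently:

```cpp
#include <cassert>
#include <cmath>

struct RGB { float r, g, b; };

// Rotate hue by `shift` (in [0,1)) via the HSV color wheel.
RGB hueShift(RGB c, float shift) {
    float mx = std::fmax(c.r, std::fmax(c.g, c.b));
    float mn = std::fmin(c.r, std::fmin(c.g, c.b));
    float d = mx - mn;
    if (d == 0.0f) return c;  // grey: hue is undefined, leave unchanged
    // RGB -> hue (in sixths of the wheel)
    float h;
    if (mx == c.r)      h = std::fmod((c.g - c.b) / d, 6.0f);
    else if (mx == c.g) h = (c.b - c.r) / d + 2.0f;
    else                h = (c.r - c.g) / d + 4.0f;
    // apply the shift and wrap back into [0, 6)
    h = std::fmod(h / 6.0f + shift + 1.0f, 1.0f) * 6.0f;
    // hue -> RGB, keeping the original saturation/value
    float x = d * (1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f));
    float r = 0, g = 0, b = 0;
    if      (h < 1) { r = d; g = x; }
    else if (h < 2) { r = x; g = d; }
    else if (h < 3) { g = d; b = x; }
    else if (h < 4) { g = x; b = d; }
    else if (h < 5) { r = x; b = d; }
    else            { r = d; b = x; }
    return { r + mn, g + mn, b + mn };
}
```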
|
# ? May 10, 2013 19:42 |
|
quote:Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader?
|
# ? May 10, 2013 20:25 |
|
Spatial posted:The proper way is to use uniforms, which you can set by name from your main program.
|
# ? May 10, 2013 20:36 |
|
Orzo posted:So, uh, it worked pretty well the first try. Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader? Yes. Vertex data doesn't have to be coordinates. It doesn't have to be anything, really. As long as you eventually get something done through the fragment shader, it's fine.
|
# ? May 10, 2013 20:43 |
|
I see. Why even confuse things with semantics in the first place, then? What is the point in *requiring* that I label a certain chunk of vertex data 'oh this is a COLOR4' but then using the 4 values for something completely custom?
|
# ? May 10, 2013 20:45 |
|
Winkle-Daddy posted:The only thing I'm not really grasping here is, I was under the impression the values I supplied to pnoise2 were the X and Y coordinates, so that I could do something like: I'm not familiar with the module and it has been a couple of years since I have played with perlin noise, perhaps someone more knowledgeable will correct me, but... pnoise2 is a two-dimensional function: you supply it with two values and it will return a value between -1 and 1:
If x2 and y2 are near x1 and y1 then z2 will be near z1. The code you posted should work, except I would do the object assignment as such: code:
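To illustrate the contract being described (deterministic, bounded, and smooth: nearby inputs give nearby outputs), here's a tiny 2D value-noise function. It's not Perlin's gradient noise or the pnoise2 implementation, just the same idea in miniature:

```cpp
#include <cassert>
#include <cmath>

// Deterministic per-lattice-point value in [0, 1), from an integer hash.
double hash01(int x, int y) {
    unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return ((h ^ (h >> 16)) % 10000) / 10000.0;
}

// 2D value noise: smoothstep-faded bilinear blend of the four
// surrounding lattice values, remapped to [-1, 1].
double valueNoise2(double x, double y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    double tx = x - xi, ty = y - yi;
    double u = tx * tx * (3 - 2 * tx), v = ty * ty * (3 - 2 * ty);
    double a = hash01(xi, yi),     b = hash01(xi + 1, yi);
    double c = hash01(xi, yi + 1), d = hash01(xi + 1, yi + 1);
    double top = a + (b - a) * u, bot = c + (d - c) * u;
    return (top + (bot - top) * v) * 2.0 - 1.0;
}
```

Sampling at closely spaced (x, y) pairs, as the quoted code does, is exactly what produces the smooth variation.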
|
# ? May 10, 2013 20:46 |
|
Orzo posted:I see. Why even confuse things with semantics in the first place, then? What is the point in *requiring* that I label a certain chunk of vertex data 'oh this is a COLOR4' but then using the 4 values for something completely custom? Because history, because there wasn't always such a flexible system and because clever people like yourself developed techniques that worked fine, and there isn't really a need to introduce a new API that loses the specifics when all it will do is break on old systems, and give new churn for everybody to port to. Doesn't stop the GL people from trying, though.
|
# ? May 10, 2013 20:48 |
|
Yeah, I kind of guessed the reasons were historical. Anyway, thanks!
|
# ? May 10, 2013 21:02 |
|
aerique posted:I'm not familiar with the module and it has been a couple of years since I have played with perlin noise, perhaps someone more knowledgeable will correct me, but... This feels kind of "wrong" from a usability standpoint, but that helped a lot. After reminding myself of some of the odd ways python treats floats by default, I think this is what I'm looking for: code:
Just in case someone else saw this and wants to try something similar, here is a bare bones example: code:
|
# ? May 10, 2013 21:20 |
|
Suspicious Dish posted:Because history, because there wasn't always such a flexible system and because clever people like yourself developed techniques that worked fine, and there isn't really a need to introduce a new API that loses the specifics when all it will do is break on old systems, and give new churn for everybody to port to. It sounds like you're using OpenGL, in which case you should just use attrib streams exclusively. Mixing attrib and named streams is actually a bad idea because while the spec allows you to do it, NVIDIA has a non-compliant implementation that aliases some named streams to attrib streams to support badly-written software that was dependent on that behavior. OneEightHundred fucked around with this message at 23:33 on May 10, 2013 |
# ? May 10, 2013 23:29 |
|
As noted above, I'm using DirectX, not OpenGL. But it's DX9, which is probably why I still have to worry about semantics, according to what you're saying.
|
# ? May 10, 2013 23:58 |
|
If I remember correctly, with DX9 it's not so much a historical issue as that DX9 still has the fixed-function pipeline, and stream sources are still bound to the fixed-function enumerations. In other words, they're named what they are because if you don't bind a vertex shader, that's what those values will be used for, and in turn, what they're referred to as in the C/C++ API.
OneEightHundred fucked around with this message at 17:30 on May 11, 2013 |
# ? May 11, 2013 08:38 |
|
That Turkey Story posted:Hey, I just want to say that what you're doing is really interesting, and I would love to see more generic programming stuff applied to gamedev. Hope you post more stuff in the future.
|
# ? May 11, 2013 10:29 |
|
a slime posted:Hey, I just want to say that what you're doing is really interesting, and I would love to see more generic programming stuff applied to gamedev. Hope you post more stuff in the future. Thanks. I'm much more used to getting criticized for trying to do generic stuff and use boost or templates in general in game development. It's sort of a part of the culture. I'm optimistic that the usefulness of this will get over that, though, and hope that I'll be able to show more people that it's a worthwhile effort. Right now I'm working on docs and consolidating some other sublibraries into one repository, and then I'll get it online.
|
# ? May 11, 2013 13:43 |
|
Does anyone have any good links on terrain generation they'd like to share? My cursory googling brings some stuff up, but not all of it is wholly readable or wholly useful. I figure I'll start small with 2d maps and then work my way up, if that helps narrow it down at all.
|
# ? May 11, 2013 23:38 |
|
That Turkey Story posted:I'm much more used to getting criticized for trying to do generic stuff and use boost or templates in general in game development. And, honestly, rightly so. This kind of "abstraction" is not helpful. To begin with, it doesn't really save you having to understand the underlying math. You still have to understand, for example, what a transformation is, what a dot product is, and when and why to use them. All this kind of wrapper code does is save you from having to know how to do a matrix multiplication or how to form a quaternion, which is a minor gain at best and a dubious one at that. Useful abstractions are at a much higher level than this -- things like "look at", or "move to", or "follow". Secondly, compile times and debug build speed are real issues. Adding a ton of template boilerplate to nearly every source file is nowhere near worth the small gain in genericness you're getting. And that genericness isn't even that useful. You pick a handedness, orientation, and row/column convention and stick with it. Being able to swap those around on the fly makes things more confusing, not less. Having a bunch of temporary objects flying all over the place when you're trying to do something relatively simple doesn't help anyone. Yeah, you can hope the compiler is smart enough to optimize it all away, but an unusably slow debug build is no good either. This kind of expression template stuff is interesting in an academic sense, but I wouldn't want anything like it anywhere near any actual engine code I was working on.
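For readers who haven't seen it, here's the technique under debate in miniature: `a + b + c` builds a lightweight expression object, and the actual loop runs once at assignment, with no vector temporaries. This is a heavily simplified, unconstrained sketch (a real library would constrain the operator), nothing like production code; in a debug build none of these calls inline, which is exactly the complaint above:

```cpp
#include <cassert>
#include <array>
#include <cstddef>

// Expression node: holds references, computes element i on demand.
template <class L, class R>
struct Sum {
    const L& l; const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

struct Vec {
    std::array<double, 3> d{};
    double operator[](std::size_t i) const { return d[i]; }
    // Evaluate any expression elementwise in a single loop.
    template <class E> Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < 3; ++i) d[i] = e[i];
        return *this;
    }
};

// Deliberately unconstrained for brevity -- don't do this for real.
template <class L, class R>
Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }
```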
|
# ? May 12, 2013 07:02 |
|
Tres Burritos posted:Does anyone have any good links on terrain generation they'd like to share? I wrote this about a year ago when I was toying around with a terrain engine. It's 3d and voxels so a bit more complex than what you're asking for but the basic concept of starting with simple mathematical terms and combining them to get increasing complexity is the same pattern. http://code-freeze.blogspot.com/2012/04/rebuilding-world-one-iteration-at-time.html
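The "combine simple terms for increasing complexity" pattern mentioned in the linked post most often shows up as octave summation (fractal Brownian motion); here's a small sketch, where baseNoise stands in for whatever 2D noise function you already have:

```cpp
#include <cassert>
#include <cmath>

// Fractal Brownian motion: sum octaves of a base noise function at
// doubling frequency and halving amplitude, then renormalize so the
// result stays in the base noise's output range.
double fbm(double (*baseNoise)(double, double),
           double x, double y, int octaves) {
    double sum = 0.0, amp = 0.5, freq = 1.0, norm = 0.0;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * baseNoise(x * freq, y * freq);
        norm += amp;
        amp  *= 0.5;   // each octave contributes half as much...
        freq *= 2.0;   // ...at twice the spatial frequency
    }
    return sum / norm;
}
```

Low octaves give rolling hills; more octaves layer in finer detail without changing the large-scale shape.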
|
# ? May 12, 2013 10:38 |
|
|
|
That Turkey Story posted:
That really is the sticking point: transform( scale( width = 2.0, height = 4.0 ), translation( left = 0.5 * meters ), position); versus transform( translation( left = 0.5 * meters ), scale( width = 2.0, height = 4.0 ), position); One of those is what I want; the other one is completely wrong. The value of having a higher-level API should be in the API making it obvious what I need to do to get what I want. Or, failing that, having some clues that help me figure out which one is which. Here's a slightly changed version with the goal of sidestepping the problem: C++ code:
Implementing fuller support for going between frames of reference is left as an exercise for the author.
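The ambiguity in the smallest possible terms: applying the same two operations in opposite orders gives different answers, which is why the call order has to be discoverable from the API. Shown here in 1D with plain functions; the same holds for the matrix forms:

```cpp
#include <cassert>

// Scale-then-translate and translate-then-scale are different maps.
double scaleThenTranslate(double x, double s, double t) { return x * s + t; }
double translateThenScale(double x, double s, double t) { return (x + t) * s; }
```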
|
# ? May 12, 2013 15:35 |