OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suspicious Dish posted:

Compare with:

C++ code:
  auto transformed_point = position;
  transformed_point.scale(2.0, 4.0, 1.0); // I'm assuming width = X and height = Y, but who knows.
  transformed_point.translate(-0.5 * meters, 0, 0); // left = negative X
  transformed_point.roll(60.0 * degrees);
Don't use transform-in-place in C++. :colbert:

C++ code:
  auto const transformed_point = position.scale(2.0, 4.0, 1.0).translate(-0.5 * meters, 0, 0).roll(60.0 * degrees);


Goreld
May 8, 2002

"Identity Crisis" MurdererWild Guess Bizarro #1Bizarro"Me am first one I suspect!"

Orzo posted:

I don't follow. What would be an example of something that doesn't use 'x/y' or 'x/y/z'?

Anything with splines or tensor-product surfaces could use parameterizations (i.e., a 't' variable).
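For instance, a quadratic Bézier curve is evaluated at a parameter t rather than addressed through x/y directly. A minimal sketch, with a made-up Point type:

C++ code:
  struct Point { double x, y; };

  // Evaluate a quadratic Bezier at parameter t in [0, 1].
  // The interface is t; the coordinate axes never appear in it.
  Point bezier2(Point p0, Point p1, Point p2, double t)
  {
      double u = 1.0 - t;
      return { u * u * p0.x + 2.0 * u * t * p1.x + t * t * p2.x,
               u * u * p0.y + 2.0 * u * t * p1.y + t * t * p2.y };
  }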

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Sorry, I don't claim to know C++ conventions. All I know is that in C it would be:

C code:
Point p = { 1.0, 2.0, 3.0 };
point_scale (&p, 2.0, 4.0, 1.0);
point_translate (&p, -0.5, 0.0, 0.0);
point_roll (&p, 60.0);

Splat
Aug 22, 2002

OneEightHundred posted:

Don't use transform-in-place in C++. :colbert:

C++ code:
  auto const transformed_point = position.scale(2.0, 4.0, 1.0).translate(-0.5 * meters, 0, 0).roll(60.0 * degrees);

What's your reasoning?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Splat posted:

What's your reasoning?
In-place modify has you do what's essentially register allocation manually (that is, you have to explicitly make new places for intermediate values to live to avoid unintentionally modifying something), and if you make a mistake, you'll get bugs caused by unexpected modifications. Operate-and-assign instead of modify is harder to make mistakes with and the compiler will optimize it into in-place modify if the destination is the source anyway.

The only drawback is that it can poo poo up things if value creation/destruction is expensive (i.e. things that are dynamically allocated), but C++11 solves that problem, and you shouldn't be running into that with fixed-size linear algebra types anyway.
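A contrived sketch of the difference, with a made-up Vec3 type (not any particular library's API):

C++ code:
  struct Vec3
  {
      double x, y, z;
      void scale_in_place(double s) { x *= s; y *= s; z *= s; }
      Vec3 scaled(double s) const { return { x * s, y * s, z * s }; }
  };

  void example(Vec3 &position)
  {
      // In-place: you must remember to copy first, or you clobber position.
      Vec3 doubled = position;     // the manual "register allocation"
      doubled.scale_in_place(2.0); // drop the copy above and this mutates position

      // Operate-and-assign: nothing to forget, and the compiler turns
      // position = position.scaled(2.0) into an in-place update anyway.
      Vec3 halved = position.scaled(0.5);
      (void)doubled; (void)halved;
  }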

OneEightHundred fucked around with this message at 20:04 on May 8, 2013

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

This is the reasoning that bugs me. I'm certainly not going to use this library, but it's possible that a colleague does, or a former colleague does. My programming philosophies are aimed at designing APIs and making code that's simple to understand, even if it means it's a bit verbose. And if something goes wrong, I have to step through template madness in my breakpoint debugger, cursing at whoever did this.
I agree that that's a real problem right now with the state of C++. I basically develop this library, and really any other generic library, with the underlying assumption that C++ will eventually have direct language support for concepts, in which case adapting the library to use those features will instantly mean better error messages. Those who don't care about the craziness behind the scenes will use it in its current state, and those who do won't. Even after the language progresses, I'm sure some people would still rather do things as tightly as possible than use higher-level generic code. This library doesn't target those people. If you hate Boost, for example, then you wouldn't use this no matter how much language support there was. Ultimately, I really don't care about that. In the end I make this stuff because it's important to me and to others in the community who are interested in generic programming.

As for stepping through the code, it's actually not as difficult as you're imagining. In the most common case, where you're dealing with vectors, points, and matrices from the same underlying library, with the same unit type, the same named dimensions, and the same order of those dimensions, the code you see when stepping through is just the underlying library's matrix and vector operations, with only a tiny bit of indirection. Interoperation is supported, and that is obviously hairier to step through, as you'd imagine, but it's only necessary when you're doing things like converting between different types.

Suspicious Dish posted:

C++ code:
  auto const transformed_point
    = transform( scale( width = 2.0, height = 4.0 )
               , translation( left = 0.5 * meters )
               , rotation( roll = 60.0 * degrees )
               , position                           // What to transform
               );
There are lots of assumptions here, like what space we're working in, and what order the transformations are applied in (the source is at the end, so does that mean we work backwards?)
These aren't assumptions; the function works with any compatible coordinate space. It doesn't have to know whether the system is left- or right-handed, and it doesn't need to know the relative order of the components. It works in entirely generic code. The order of the parameters should be intuitive to anyone familiar with linear algebra, since it's the same order as the abstracted-away matrix and vector operations: you multiply going left to right.

Suspicious Dish posted:

Compare with:

C++ code:
  auto transformed_point = position;
  transformed_point.scale(2.0, 4.0, 1.0); // I'm assuming width = X and height = Y, but who knows.
  transformed_point.translate(-0.5 * meters, 0, 0); // left = negative X
  transformed_point.roll(60.0 * degrees);
The first thing I want to point out is that it takes fewer lines of code to use. It's a bit more verbose, but I think it's a lot more explicit. There are subtle things in that API too, like the fact that I'm passing three parameters to the scale and the translate; one or two is ambiguous, and with a method name like "scaleX" or "translateX", it's easy to miss.

And now that all the transformations are independent methods, I can inspect the intermediate result without understanding the internals of the API -- if I get a wrong result, I have a clear debug path that's not delving into a metasystem.
I don't see that at all. How is transformed_point.scale(2.0, 4.0, 1); more clear than scale( width = 2.0, height = 4.0 )? I don't see how anyone reading the first bit of code would have fewer questions than the second. What dimension does 2 affect, what does 4 affect, etc. The comment on that line is a crutch, used because the code itself isn't self-descriptive. Are you going to have a similar comment everywhere you do a transformation, explaining what each parameter means every time? Are you going to leave it out because it's the same throughout the project? That doesn't help new programmers at all when they enter the project.
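For what it's worth, the width = 2.0 syntax doesn't need anything exotic. Here's a minimal sketch of one way to get it; the names are invented and this is a guess at the general technique, not this library's actual code:

C++ code:
  // A tag object whose operator= wraps the value into a distinct type.
  struct width_tag
  {
      struct value { double v; };
      value operator=(double v) const { return { v }; }
  };
  const width_tag width = {};

  // An overload can then take the named argument unambiguously:
  void scale_width(width_tag::value w); // called as scale_width(width = 2.0)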

Suspicious Dish posted:

It's probably also faster, given that the compiler might be able to inline the calls to simple math methods rather than a generic system.
The opposite is actually true. In my library you can obviously do the equivalent of what you've shown, spread across separate statements, but I prefer the form presented in my example because it's often more efficient as well as more concise, and it minimizes top-level mutation (in fact, there is none). The reason is that under the hood the different transformations can be combined, simplified, and can take advantage of expression templates if the underlying library uses them (e.g. Eigen). It allows the backend to perform high-level optimizations that your approach cannot. There's a mistaken notion among a lot of people that more abstraction means slower code. While that's true in many languages, in C++ the opposite is usually true, since you're getting your generic behavior purely through compile-time dispatch of overloads and templates.
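A toy illustration of the kind of combining meant here (nothing like the library's real machinery): two translations collapse into a single one before the point is ever touched.

C++ code:
  struct Point       { double x, y, z; };
  struct Translation { double dx, dy, dz; };

  constexpr Translation combine(Translation a, Translation b)
  {
      return { a.dx + b.dx, a.dy + b.dy, a.dz + b.dz };
  }

  constexpr Point apply(Translation t, Point p)
  {
      return { p.x + t.dx, p.y + t.dy, p.z + t.dz };
  }

  // apply(combine(t1, t2), p) touches the point once, where the
  // statement-at-a-time style applies each translation separately.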

Anyway, all of that is unimportant in practice because it's not likely to be a bottleneck, but the fact that you get it for free is a welcome side effect of the API's design.

Suspicious Dish posted:

Note that this isn't a *new* API. I've seen it done a thousand times over, since 1996. There's a reason it's *the* API for this.
It's "the API" that is subtly different everywhere you go.

Suspicious Dish posted:

"left" is a relative thing. What space is it relative to? Who decides that? The user of the library? Where?
I'm not sure what you're asking here. You seem to be thinking that it's introducing the very problems that it actually solves. "left" acts as a way to communicate the component values without having to know anything about the underlying representation.

Suspicious Dish posted:

If, for instance, your model format has a different understanding of X/Y/Z than your engine, you can fix that once, on model load. Making something where everything is generic at runtime is a recipe for disaster. Will you generate your vertex shaders with different coordinate systems, too? Will you correct before you do the upload?
What you described first is what's happening -- it's converted appropriately on model load based on either qualities intrinsic to the format or what is explicitly specified at the call-site. These aren't run-time abstractions, these are compile-time abstractions. There's no fancy run-time stuff like type-erasure going on.

That Turkey Story fucked around with this message at 20:44 on May 8, 2013

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

That Turkey Story posted:

I agree that that's a real problem right now with the state of C++. I basically develop this library, and really any other generic library, with the underlying assumption that C++ will eventually have direct language support for concepts, in which case adapting the library to use those features will instantly mean better error messages.

I'm a pragmatist. I have to work with MSVC and SunPro. Yes, it sucks, but I'm striving to make life easier for everyone I work with. If I can't do that realistically, right now, then I can't do it.

That Turkey Story posted:

These aren't assumptions, the function works with any compatible coordinate space. It doesn't have to know if the system is left or right handed and it doesn't need to know the relative order of the components. It works in entirely generic code. The order of the parameters should be intuitive to anyone familiar with linear algebra since it's the same order as the abstracted-away matrix and vector operations. You go left-to-right multiplying.

If I have a position vector, what does scaling it twice by height even mean? A position doesn't even have a width or height. You can say "I scale it along the scene-space horizontal axis", but I would call it a stretch to call that width. I'm going to assume that scaling the width means blindly multiplying the X field of the point by the scalar, ignoring rotation, because that's what I would expect a naive point class to do if I called the "scale" function.

Who decides if "height" modifies the Y or the Z vector? Both these systems are in common use, and I don't see anything in your code that decides this. You can punt and say "well, it can modify the Y or Z axis depending on something at compile time / run time", but that just makes my breakpoint debugging harder. At debug time, I see "x", "y", or "z" fields, and I have no idea what scaling the "height" does in the spur of the moment.

I notice you also left the question of the order of the transformations unresolved. I still don't know what the ordering is, because I haven't read the spoiler text.

That Turkey Story posted:

I don't see that at all. How is transformed_point.scale(2.0, 4.0, 1); more clear than scale( width = 2.0, height = 4.0 )? I don't see how anyone reading the first bit of code would have fewer questions than the second. What dimension does 2 affect, what does 4 affect, etc.

It's X, Y, Z. Nobody writes anything different. The comment was for mapping your weirdo interface to mine, because I have no idea what width or height are.

That Turkey Story posted:

The opposite is actually true. In my library you can obviously do the equivalent of what you've shown, spread across separate statements, but I prefer the form presented in my example because it's often more efficient as well as more concise, and it minimizes top-level mutation (in fact, there is none). The reason is that under the hood the different transformations can be combined, simplified, and can take advantage of expression templates if the underlying library uses them (e.g. Eigen).

Crytek's engine had the same concept of finding inverse transforms and cancelling them out automatically, until they found that simply doing the math was faster than trying to detect which transforms would cancel. It didn't work out in practice.

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

If I have a position vector, what does scaling it twice by height even mean? A position doesn't even have a width or height. You can say "I scale it along the scene-space horizontal axis", but I would call it a stretch to call that width. I'm going to assume that scaling the width means blindly multiplying the X field of the point by the scalar, ignoring rotation, because that's what I would expect a naive point class to do if I called the "scale" function.
Okay, this is the kind of feedback I was hoping for, and looking back I guess my comment in the code was poor. What the code is effectively doing is transforming the reference frame. The operation being represented here is the type of operation that takes place as you navigate a scene graph: an inner product of the transformations. In most cases this just involves translation, rotation, and projection, depending on whether or not you keep them separate. I threw scale into the example because it's simpler than a projection and I wanted to show that it works with arbitrary transformations. Internally, it deals with certain transformations optimally when it is easy to do so (i.e. a translation doesn't get converted to a matrix when directly applied to a point; it's actually the equivalent of vector addition), and when you throw in complicated things like camera projections, it "converts" the projection to a homogeneous matrix representation and does the raw operation as an inner product using the underlying library.

Suspicious Dish posted:

Who decides if "height" modifies the Y or the Z vector? Both these systems are in common use, and I don't see anything in your code that decides this.
That part is left to the person using the library. In your project, you know that your vector's "y" is up or "z" is up. You tell the library this via a tiny bit of compile-time metadata at one spot in code. It's basically a little type-traits kind of thing, or in generic programming terms, a "concept map." You're telling the library "in order to access the 'forward' component of this vector type, you do your_vector[2]." If your "z" dimension goes "back" instead of "forward", you can tell it that too, in which case if a function attempts to access "forward," it simply receives a negated view of the element, without the person writing the generic function having to think about it.
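A minimal sketch of what such a concept map might look like; all names here are invented, and the library's real mechanism is presumably more elaborate:

C++ code:
  struct MyVec { double x, y, z; };

  template<typename Vec> struct spatial_traits;

  // This project's z axis points backward, so "forward" is a negated view.
  template<> struct spatial_traits<MyVec>
  {
      static double forward(const MyVec &v) { return -v.z; }
      static double up(const MyVec &v)      { return  v.y; }
      static double right(const MyVec &v)   { return  v.x; }
  };

  // Generic code reads spatial_traits<Vec>::forward(v) and never
  // hardcodes an index or a sign.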

Suspicious Dish posted:

You can punt and say "well, it can modify the Y or Z axis depending on something at compile time / run time", but that just makes my breakpoint debugging harder. At debug time, I see "x", "y", or "z" fields, and I have no idea what scaling the "height" does in the spur of the moment.
If you don't know whether or not y corresponds to "height" in the spur of the moment then you are sharing in my pain. If the underlying type you're using just has x, y, and z, or indices, then you're right, you're going to have to think, but at least you only have to think about it during debugging. At some point or another you'll have to see internals; I just want to minimize that. Maybe I'm trading required knowledge of the domain for required deeper knowledge of the language, but I'd gladly take that trade, since knowledge of the language is more generally applicable. I just want a library to handle as much of the domain-specific stuff as possible.

Suspicious Dish posted:

I notice you also left the question of the order of the transformations unresolved. I still don't know what the ordering is, because I haven't read the spoiler text.
Read the spoiler text. I think ultimately I need to come up with a better name than "transform" for the operation.

Suspicious Dish posted:

Crytek's engine had the same concept of finding inverse transforms and cancelling them out automatically, until they found that simply doing the math was faster than trying to detect which transforms would cancel. It didn't work out in practice.
I don't try to do anything horribly sophisticated like that. I just handle the common, simple transformations like translations and scales and combinations of them, falling back to matrices if no specialization is provided. If the backend uses expression templates, then those get used implicitly. If it doesn't, then the code is basically equivalent to your code that's spread across multiple statements.
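As a rough sketch of that dispatch, with made-up types rather than the library's actual code:

C++ code:
  struct Point3       { double x, y, z; };
  struct Translation3 { double dx, dy, dz; };

  // Cheap path: a translation applied to a point is plain addition;
  // no homogeneous matrix is ever built.
  Point3 apply(const Translation3 &t, const Point3 &p)
  {
      return { p.x + t.dx, p.y + t.dy, p.z + t.dz };
  }
  // A transformation without a cheap overload would fall back to a
  // generic template that converts it to a matrix first.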

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

That Turkey Story posted:

Okay, this is the kind of feedback I was hoping for, and looking back I guess my comment in the code was poor. What the code is effectively doing is transforming the reference frame.

So you're effectively building a transformation matrix containing those transform operations. Why not have this as two steps: build a matrix from these transforms and then multiply it with the point? That's a lot clearer about what this will eventually do.

That Turkey Story posted:

That part is left to the person using the library. In your project, you know that your vector's "y" is up or "z" is up. You tell the library this via a tiny bit of compile-time metadata at one spot in code.

What's the benefit in keeping it flexible if it's a global piece of metadata? I was imagining that it would be per-point.

That Turkey Story posted:

Read the spoiler text. I think ultimately I need to come up with a better name than "transform" for the operation.

I opted not to read the spoiler text intentionally, because the point of the exercise was to test my impression of the API without knowing anything. You can certainly keep a method like perform_transform that takes one of these transform operations, but I'd keep each transform as a separate method call. That makes the ordering of the operations clear. Putting the source at the beginning is another choice, as that's fairly clear as well. Having the source at the end causes doubt in my mind.

That Turkey Story posted:

If you don't know whether or not y corresponds to "height" in the spur of the moment then you are sharing in my pain.

If I were writing a game engine, I'd know in the spur of the moment, because I'd pick one convention and stick with it for the entire engine. X/Y/Z coordinates are not "internals" of a point; they're the very things that a point contains.

That Turkey Story
Mar 30, 2003

Suspicious Dish posted:

So you're effectively building a transformation matrix containing those transform operations. Why not have this as two steps: build a matrix from these transforms and then multiply it with the point? That's a lot clearer about what this will eventually do.
If you really want to, you can do that in code. All "Transformation" types can be converted to a matrix (the ability to do so is a requirement of the concept). You can just write:

auto translation_matrix = to_matrix( translation( right = 5, forward = 6 ) );

and then perform matrix multiplications, although it might not be as efficient because, again, under the hood I only fall back to that kind of implementation for unspecialized transformation types.

Efficiency aside, I just want to make things generic and also eliminate thinking about the matrix side of things for the common types of transformations that people do. In my opinion, it's sort of an embarrassment of the field that in order to make progress with non-trivial game development (or most things graphical) you need to know, at least to some extent, what a homogeneous coordinate system is, what a quaternion is, how to properly accumulate transformations, etc. Even when you are using a premade engine, a lot of that stuff tends to rise to the surface. Think about how long games have been around and how low-level supposedly "high level" libraries still are in many ways.

Ultimately, you can verbally describe the types of operations you perform when navigating a scene graph to pretty much anybody who is not a mathematician or programmer, without using words like "matrix," and you're not really leaving out anything other than implementation details. There's no reason a library can't be that abstract, and it's not even that much of a stretch. Eigen tries to do this to some extent in certain places with its transformation types, such as Translation, but it's still overall a library for the mathematically minded. I'm just trying to push things a bit further toward "just do the transformations I tell you efficiently, without telling me how you did it." I agree that the exact semantics associated with high-level operations are still confusing in some ways, but I think that's more of a naming issue than anything else.

Suspicious Dish posted:

What's the benefit in keeping it flexible if it's a global piece of metadata? I was imagining that it would be per-point.
It's per type. In practice it's likely even per project, if you just have one vector type that you use. If you're using the same type for different kinds of points, you can always adapt them at the call site with a reference wrapper, but I don't think that's all that common a concern. Think of it sort of like the benefits you get from supporting iterator concepts for a container you're developing: you immediately get the ability to use the algorithms of the STL without even having to use the STL's container types. The underlying concepts used here are analogous to some extent. By communicating to the library how your types work, you can take advantage of the functions it provides without making any intrusive changes to your game (i.e. you don't have to adopt a certain engine, and you don't have to adopt a certain vector or vertex type). You can just pull in the library, use a couple of functions, and not change your existing code.

I'm still a ways off, and it's so ambitious that I don't know if I'll even get there, but the ultimate goal I'm working toward is a way to use and develop generic components for games and simulations that aren't bound to a particular engine. This spatial library is one little aspect of it, and it's just about complete for most uses now, aside from possibly better naming. I like to think that it's possible to have truly generic algorithms for things like model loading, space partitioning, etc. that can be used with any engine given the proper concept maps. That's a big difference from what a programmer has to do now, which is generally either roll such implementations from scratch for an in-house engine or use an existing engine that has already done that work.

Suspicious Dish posted:

I opted not to read the spoiler text intentionally, because the point of the exercise was to test my impression of the API without knowing anything. You can certainly keep a method like perform_transform that takes one of these transform operations, but I'd keep each transform as a separate method call. That makes the ordering of the operations clear. Putting the source at the beginning is another choice, as that's fairly clear as well. Having the source at the end causes doubt in my mind.
My only issue with putting the object being transformed first is that that's not the order you'd expect from matrix multiplications, so if you are more familiar with this operation, you'd expect the object to be last. While I want things accessible to people who don't do much linear algebra, I also want things to be intuitive for people that do. My gut says that if I just pick a better name that conveys the importance of order and that the transformations are accumulated, it would make things more clear for both parties, but perhaps not.

Suspicious Dish posted:

If I were writing a game engine, I'd know in the spur of the moment, because I'd pick one convention and stick with it for the entire engine. X/Y/Z coordinates are not "internals" of a point; they're the very things that a point contains.
As someone who has jumped into projects midway through, dealing with varying types and conventions, I can say this is all well and good for the person who's been there from the start, but frustrating for others. It doesn't take much time to adapt, but it's still an annoyance. Even when you have been working on a project from the start, I think I'd much rather see code like "acceleration[forward] = 5.0 * meters / sec" than "acceleration[2] = -5.0".

Goreld
May 8, 2002

"Identity Crisis" MurdererWild Guess Bizarro #1Bizarro"Me am first one I suspect!"

That Turkey Story posted:

I think I'd much rather see code like "acceleration[forward] = 5.0 * meters / sec" than "acceleration[2] = -5.0".

Well, you could (and many people do) use enums to replace those indices.
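A minimal sketch:

C++ code:
  enum Axis { Right = 0, Up = 1, Forward = 2 };

  void example()
  {
      double acceleration[3] = {};
      acceleration[Forward] = -5.0; // readable, though the sign convention still leaks
  }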

Splat
Aug 22, 2002

That Turkey Story posted:

My only issue with putting the object being transformed first is that that's not the order you'd expect from matrix multiplications, so if you are more familiar with this operation, you'd expect the object to be last. While I want things accessible to people who don't do much linear algebra, I also want things to be intuitive for people that do. My gut says that if I just pick a better name that conveys the importance of order and that the transformations are accumulated, it would make things more clear for both parties, but perhaps not.

Well, what you put first also depends on whether your matrices are row or column major, so it's hard to say what people would expect to be where.
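A minimal sketch of the convention difference (2D, made-up types):

C++ code:
  #include <array>

  using Row2 = std::array<double, 2>;
  using Mat2 = std::array<std::array<double, 2>, 2>;

  // Row-vector convention: the point goes on the left, p' = p * M,
  // so a chain reads left-to-right: mul(mul(p, scale), rotate).
  Row2 mul(const Row2 &p, const Mat2 &m)
  {
      return { p[0] * m[0][0] + p[1] * m[1][0],
               p[0] * m[0][1] + p[1] * m[1][1] };
  }
  // Column-vector code composes the same chain the other way around,
  // rotate * scale * p, with the point on the right.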

That Turkey Story
Mar 30, 2003

Splat posted:

Well, what you put first also depends on whether your matrices are row or column major, so it's hard to say what people would expect to be where.

The code has the same result regardless of storage order. Accesses of matrices are done internally through named tensor indices.

h_double
Jul 27, 2001
I've been tooling around with HaxePunk a bit and it looks extremely snazzy.

I've always liked AS3 + Flixel/Flashpunk a lot from a programming standpoint (it's fun to write code with), but the Flash VM itself is really assy for much beyond simple-ish "web games". Haxe is syntactically nearly identical to AS3, with the big advantage that it can cross-compile to a bunch of targets (including Flash/AIR; C++ code for Win/Mac/Linux/iOS; and its own cross-platform VM, Neko). It also has a command-line tool, haxelib, which provides a shared repository for libraries (a la Ruby gems), including a port of Box2D as well as a good amount of other stuff.

HaxePunk, as you'd expect, is a Haxe port of Flashpunk (built on top of the NME framework, which adds a bunch of stuff to the vanilla Haxe API). I haven't gotten much further with it than moving a sprite around the screen, but it's a really easy transition from AS3/Flashpunk, and there are some nice conveniences for project/build management. From the command line, I can just type "haxelib run HaxePunk new fooProject", and it'll create a skeleton project with subdirectories for src/assets/bin, a .nmml project file, etc. Building/running the project is as simple as "nme test fooProject.nmml flash".

Other stuff that I really like:

- asset management is a lot easier than Flashpunk; just define your asset directories in the .nmml and then you can write "graphic = new Image("gfx/block.png")" without having to manually embed the assets
- support for gamepads and multitouch (for compile targets where that makes sense)
- performance of Haxe/NME code compiled to SWF/AIR is supposed to compare very favorably to the Adobe mxmlc compiler


I first got seriously curious about this stuff when I read about how Defender's Quest (which is a great tower defense/RPG hybrid) was developed using NME, and having a port of the Flashpunk API is just awesome. If, like me, you're interested in developing 2D stuff where bleeding edge performance isn't a requirement, this is definitely worth a look.

It looks like there is also a Haxe Flixel port (HaxeFlixel) if Flixel is your thing.

h_double fucked around with this message at 04:18 on May 9, 2013

Splat
Aug 22, 2002

That Turkey Story posted:

The code has the same result regardless of storage order. Accesses of matrices are done internally through named tensor indices.

I didn't say it would have different results. I'm saying I work in row major stuff all day and you normally end up putting your point on the left, not the right. It was in response to:

That Turkey Story posted:

My only issue with putting the object being transformed first is that that's not the order you'd expect from matrix multiplications, so if you are more familiar with this operation, you'd expect the object to be last.

For me it is.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

That Turkey Story posted:

Efficiency aside, I just want to make things generic and also eliminate thinking about the matrix side of things for the common types of transformations that people do. In my opinion, it's sort of an embarrassment of the field that in order to make progress with non-trivial game development (or most things graphical) you need to know, at least to some extent, what a homogeneous coordinate system is, what a quaternion is, how to properly accumulate transformations, etc.

This is because matrices are the most flexible model for doing complex transformations. You can build simple tools on top of matrices, like "turn left", "turn right", but you can't really expect people to grok 3D graphics without understanding the idea of relative and absolute spaces. That still doesn't answer the question "relative to what". What if I want to have a billboard sprite that always faces the camera? (Still common even today in games, but more cleverly hidden). If your simple API only allows me to specify transforms in the absolute space, I need to step down to the matrix level.

Most of this flexibility *is* needed to build the wide range of effects common in today's games. Once you get into complex shading, you'll need to know a lot about how camera space and absolute space interact, etc.

It's an abstraction where you can provide a toy learning API, but not an abstraction that you can skip entirely. Soya3D attempted the same thing and then gave up and added matrices at the API level because it's a needed thing.

Megadrive
Sep 6, 2011

h_double posted:

I've been tooling around with HaxePunk a bit and it looks extremely snazzy.

...

It looks like there is also a Haxe Flixel port (HaxeFlixel) if Flixel is your thing.

I checked out both of these because of your post, and HaxeFlixel seems a lot less intuitive than HaxePunk. I loved both libraries in AS3 and eventually settled on Flixel, but HaxePunk is literally so drat easy to use. Thanks so much for the reference. I'm just having a bit of trouble trying to get a particle emitter going; the lack of tutorials isn't very helpful, unfortunately. :(

edit: I worked it out! HaxePunk is really great.

Megadrive fucked around with this message at 12:01 on May 9, 2013

That Turkey Story
Mar 30, 2003

Splat posted:

I didn't say it would have different results. I'm saying I work in row major stuff all day and you normally end up putting your point on the left, not the right.
Okay, I see. I thought you just meant that you were changing storage order, not access. In other words you use row-vectors as opposed to column-vectors, work with the transpose of what I would call a transformation matrix, and order everything the other way around? Is this common? Is this at a company or just an independent project? I think I can understand expecting the reverse order of parameters in that case, though the library still needs to pick one convention since that's all thankfully abstracted away. Whichever order I use, I want that order to be recognizable from the name. You shouldn't have to care about backend details like that at a high level.


Suspicious Dish posted:

You can build simple tools on top of matrices, like "turn left", "turn right", but you can't really expect people to grok 3D graphics without understanding the idea of relative and absolute spaces
I'm not sure what you're trying to say here. What I see is analogous to "you can abstract away memory management but it's still helpful to have a good understanding of memory." Yes, of course, but that doesn't mean that the abstraction isn't useful, just like smart pointers and garbage collection aren't useless. You shouldn't have to think about matrices and storage order and low-level mathematical operations to program most games. The more low-level you go, the more you will need to think about, of course, but that doesn't at all mean that you should forgo abstraction.

It's also not like there's no such thing as relative and absolute spaces anymore when you use the library. Rather, that's precisely what is there; it's just that the concept is intentionally divorced from the representation as matrices. Transformation between spaces isn't intrinsically linked to matrices, and ultimately matrices can only represent linear mappings between spaces anyway. We just use matrices because they can represent the most common kinds of transformations used in graphics and games in a single type that can be worked with efficiently (though even then, not all types of transformations used in modern games can be done simply with matrix multiplication).

Suspicious Dish posted:

That still doesn't answer the question "relative to what". What if I want to have a billboard sprite that always faces the camera? (Still common even today in games, but more cleverly hidden). If your simple API only allows me to specify transforms in the absolute space, I need to step down to the matrix level.
I don't know why you think it can only deal with a single space. Each "Transformation" still represents a transformation between spaces, just like a transformation matrix does. The "transform" operation is internally accumulating these transforms; the overall operation maps the point into another space. You're not losing anything through the abstraction, you're just not dealing directly with matrices at the top level. You're directly saying "go from this space to that space via a translation and a rotation."

Suspicious Dish posted:

It's an abstraction where you can provide a toy learning API, but not an abstraction that you can skip entirely. Soya3D attempted the same thing and then gave up and added matrices at the API level because it's a needed thing.
I do have tensors at the API level, you just don't need to deal with them for transformations. That said, a matrix is still a valid "Transformation" type that you can use directly with the transform function if you really want to, but I directly provide translation, scale, rotation, reflection, shear, and projection, so in practice you'd rarely if ever have to (and even then, it's trivial to make a higher-level Transformation type).

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I partially agree with your "memory management is hard, so don't bother with abstractions" statement. Before "the object model" that every system uses today, there were other abstractions over memory that didn't quite work out in practice. It was a tough problem, but not an unsolvable one, and we pulled through successfully.

Similarly, "visual programming" would be a similar problem that hasn't seen a good solution. The existing "visual programming" solutions aren't very good, but I'll hold out faith that it's a research topic, and a newer system might prove the concept.

I'm simply expressing skepticism at your claim to abstract away fundamentally complex topics into a general-purpose usable API. I want to stress not that it can't be done, but that your abstraction may not be as good as you claim, considering I can see obvious parallels to Soya3D and other game engines, and their experience shows it didn't quite work out.

I think I'm getting farther away from the C++ metaprogramming I originally complained about, on which I think we're going to have to agree to disagree.

Splat
Aug 22, 2002

That Turkey Story posted:

Okay, I see. I thought you just meant that you were changing storage order, not access. In other words you use row-vectors as opposed to column-vectors, work with the transpose of what I would call a transformation matrix, and order everything the other way around? Is this common? Is this at a company or just an independent project? I think I can understand expecting the reverse order of parameters in that case, though the library still needs to pick one convention since that's all thankfully abstracted away. Whichever order I use, I want that order to be recognizable from the name. You shouldn't have to care about backend details like that at a high level.

It's company, not personal. No idea how common.

I Lost My Password
Nov 12, 2009
I have recently started a new project: we are building an RPG in the Final Fantasy style, with me as dev/designer and a friend as the writer. In previous projects I have written my own engines, but I always end up spending more time playing with my code than doing the hard work of game design. At the moment I just want something that provides all the boring bits for me and lets me focus on being a designer.

I am giving RPG Maker VX Ace a try. I was very pleased to see that RPG Maker uses Ruby for all its scripting, but I am very not pleased with how it stores the files or how bad the interface is (seriously, naming your tabs 1, 2 and 3?!). That said, it seems to provide everything I need for my current project, which is better than my homebrew code base.

Most of the interface and game code will end up being modified during this process, and I want to keep track of my changes. I am using Git to keep my files in order, but since they are serialized Ruby objects I don't get meaningful diffs. I have found a script that will import/export all of the scripts in the game to a folder (http://forums.rpgmakerweb.com/index.php?/topic/9430-script-importexport/), which is quite useful but only covers the actual Ruby scripts in the game. I would like to keep plain-text copies of all my configuration files; the ability to modify them and use the changes would be cool, but that is less important than keeping track of changes. Has anyone had experience doing this? Depending on how well Ruby can serialize to text, this will range from easy to super lame.


tl;dr: Does anyone here use RPG Maker and if so how do you manage your files and changes?

Winkle-Daddy
Mar 10, 2007

aerique posted:

It's been a while but try some float values instead of the integers you're using now, so instead of 15 try for example 15.35. Or just stay between 0 and 1.

From what I recall, perlin noise is made to wrap around, so for integer inputs like 0 and 1 it will indeed always return 0.

This was a page back, but I just wanted to give a shout and huge thanks for helping me fix my stupid!

code:
>>> from noise import pnoise2
>>> x = pnoise2(1.2,.4)
>>> print(x)
0.0137656033039
The only thing I'm not really grasping here is, I was under the impression the values I supplied to pnoise2 were the X and Y coordinates, so that I could do something like:
code:
for x in xrange(0,100):
    for y in xrange(0,100):
        object = pnoise2(x,y)
        if(object > .2):
            <instance some tile>
        elif(object < .2):
            <instance something else>
        else: etc
Perhaps I'm not understanding how this module is used, still?

Yes, I read the discussion on procedurally generated worlds, I'm just trying to learn how to generate random worlds to test different scenarios :)

Winkle-Daddy fucked around with this message at 17:33 on May 10, 2013

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
Shader question. I am fairly inexperienced with using shaders, but I seem to be getting the hang of it. Last night I realized I wanted a 'hue shift' shader for various effects (like an enemy flashing when they're hit).

I implemented this by adding a new float to my vertex data and then passing it in using some semantic that I wasn't using (BLENDWEIGHT, I think, which is a single float). The shader then uses that value--between 0 and 1--to implement a hue shift (e.g., a hue shift of .5 would make something red into something green).

So, uh, it worked pretty well the first try. Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader?
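For the curious, this is roughly what that looks like in a D3D9 vertex declaration: a packed float3 position plus one float carried under BLENDWEIGHT. A sketch of the technique (offsets assume tight packing), not Orzo's actual code:

C++ code:
  #include <d3d9.h>

  const D3DVERTEXELEMENT9 elements[] =
  {
      { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION,    0 },
      { 0, 12, D3DDECLTYPE_FLOAT1, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BLENDWEIGHT, 0 },
      D3DDECL_END()
  };
  // The vertex shader reads the BLENDWEIGHT semantic as the 0..1 hue amount.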

Spatial
Nov 15, 2007

quote:

Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader?
The proper way is to use uniforms, which you can set by name from your main program.
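A minimal sketch in GL terms (program and hue_shift are made-up names; assumes an extension loader providing the GL 2.0 entry points):

C++ code:
  void set_hue(GLuint program, float amount)
  {
      glUseProgram(program);
      GLint loc = glGetUniformLocation(program, "hue_shift");
      glUniform1f(loc, amount); // everything drawn after this uses the new value
  }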

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!

Spatial posted:

The proper way is to use uniforms, which you can set by name from your main program.
Google reveals that this is an OpenGL/GLSL term; I'm using DirectX/HLSL. However, it seems like the equivalent is to set a variable in the shader. But that seems weaker, since it requires another pass for everything that has a different hue. I'm not hue-shifting the entire game, I'm hue-shifting individual sprites.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Orzo posted:

So, uh, it worked pretty well the first try. Is what I did standard practice, just grabbing some random unused semantic and adding extra data to vertices to use for whatever you want in a shader?

Yes. Vertex data doesn't have to be coordinates. It doesn't have to be anything, really. As long as you eventually get something done through the fragment shader, it's fine.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
I see. Why even confuse things with semantics in the first place, then? What is the point in *requiring* that I label a certain chunk of vertex data 'oh this is a COLOR4' but then using the 4 values for something completely custom?

aerique
Jul 16, 2008

Winkle-Daddy posted:

The only thing I'm not really grasping here is, I was under the impression the values I supplied to pnoise2 were the X and Y coordinates, so that I could do something like:
code:
for x in xrange(0,100):
    for y in xrange(0,100):
        object = pnoise2(x,y)
        if(object > .2):
            <instance some tile>
        elif(object < .2):
            <instance something else>
        else: etc
Perhaps I'm not understanding how this module is used, still?

I'm not familiar with the module and it has been a couple of years since I have played with perlin noise, perhaps someone more knowledgeable will correct me, but...

pnoise2 is a two-dimensional function: you supply it with two values and it will return a value between -1 and 1:

  • pnoise(x1,y1) = z1
  • pnoise(x2,y2) = z2

If x2 and y2 are near x1 and y1 then z2 will be near z1.

The code you posted should work, except I would do the object assignment like this:

code:
object = pnoise2(x / 100.0, y / 100.0)
You can of course offset, rotate, and scale the vector with which you move through "perlin noise space"; by that I mean you do not have to directly map the X and Y from your world to the input values of the pnoise2 function.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Orzo posted:

I see. Why even confuse things with semantics in the first place, then? What is the point in *requiring* that I label a certain chunk of vertex data 'oh this is a COLOR4' but then using the 4 values for something completely custom?

Because of history: there wasn't always such a flexible system, clever people like yourself developed techniques that worked fine, and there isn't really a need to introduce a new API that loses the specifics when all it will do is break on old systems and create new churn for everybody to port to.

Doesn't stop the GL people from trying, though.

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
Yeah, I kind of guessed the reasons were historical. Anyway, thanks!

Winkle-Daddy
Mar 10, 2007

aerique posted:

I'm not familiar with the module and it has been a couple of years since I have played with perlin noise, perhaps someone more knowledgeable will correct me, but...

pnoise2 is a two-dimensional function: you supply it with two values and it will return a value between -1 and 1:

  • pnoise(x1,y1) = z1
  • pnoise(x2,y2) = z2

If x2 and y2 are near x1 and y1 then z2 will be near z1.

The code you posted should work, except I would do the object assignment like this:

code:
object = pnoise2(x / 100.0, y / 100.0)
You can of course offset, rotate, and scale the vector with which you move through "perlin noise space"; by that I mean you do not have to directly map the X and Y from your world to the input values of the pnoise2 function.

This feels kind of "wrong" from a usability standpoint, but that helped a lot. After reminding myself of the odd way Python 2 handles integer division by default, I think this is what I'm looking for:
code:
>>> a = pnoise2(1 / 100.0, 5 / 100.0)
>>> print(a)
0.00999013800174
Thanks aerique, you are a scholar and a gentleman!

Just in case someone else saw this and wants to try something similar, here is a bare bones example:

code:
from noise import pnoise2

WORLD_SIZE = 10
for x in xrange(0, WORLD_SIZE):
    for y in xrange(0, WORLD_SIZE):
        print(pnoise2(x / 100.0, y / 100.0))
Outputs a whole bunch of float values and is quite fast!

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suspicious Dish posted:

Because of history: there wasn't always such a flexible system, clever people like yourself developed techniques that worked fine, and there isn't really a need to introduce a new API that loses the specifics when all it will do is break on old systems and create new churn for everybody to port to.

Doesn't stop the GL people from trying, though.
The reason is that the coordinate streams were at one point bound to a fixed number of slots which had specific uses related to the fixed-function pipeline. Neither API requires them to be named after those things any more: OpenGL has deprecated everything but the "attrib" streams, and D3D10+ binds semantics by name at the API level, which means you can name them whatever you want.

It sounds like you're using OpenGL, in which case you should just use attrib streams exclusively. Mixing attrib and named streams is actually a bad idea because while the spec allows you to do it, NVIDIA has a non-compliant implementation that aliases some named streams to attrib streams to support badly-written software that was dependent on that behavior.
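Sketched, what "just use attrib streams" looks like; the location (3) and component count are arbitrary, and this assumes a GL 2.0+ loader:

C++ code:
  void bind_hue_attrib(GLsizei stride, const void *offset)
  {
      // A numbered generic attribute instead of a legacy named stream.
      glEnableVertexAttribArray(3);
      glVertexAttribPointer(3, 1, GL_FLOAT, GL_FALSE, stride, offset);
  }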

OneEightHundred fucked around with this message at 23:33 on May 10, 2013

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!
As noted above, I'm using DirectX, not OpenGL. But it's DX9, which is probably why I still have to worry about semantics, according to what you're saying.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
If I remember correctly, with DX9 it's not so much a historical thing as that DX9 still has the fixed-function pipeline, and stream sources are still bound to the fixed-function enumerations. In other words, they're named what they are because if you don't bind a vertex shader, that's what those values will be used for, and in turn, that's what they're referred to as in the C/C++ API.

OneEightHundred fucked around with this message at 17:30 on May 11, 2013

a slime
Apr 11, 2005

That Turkey Story posted:

:words:

Hey, I just want to say that what you're doing is really interesting, and I would love to see more generic programming stuff applied to gamedev. Hope you post more stuff in the future.

That Turkey Story
Mar 30, 2003

a slime posted:

Hey, I just want to say that what you're doing is really interesting, and I would love to see more generic programming stuff applied to gamedev. Hope you post more stuff in the future.

Thanks. I'm much more used to getting criticized for trying to do generic stuff and use boost or templates in general in game development. It's sort of a part of the culture. I'm optimistic that the usefulness of this will win people over, though, and I hope to show more people that it's a worthwhile effort. Right now I'm working on docs and consolidating some other sublibraries into one repository, and then I'll get it online.

Tres Burritos
Sep 3, 2009

Does anyone have any good links on terrain generation they'd like to share?

My cursory googling brings some stuff up, but not all of it is wholly readable and not all of it is wholly useful.

I figure I'll start small with 2d maps and then work my way up if that helps narrow it down at all.

Grocer Goodwill
Jul 17, 2003

Not just one kind of bread, but a whole variety.

That Turkey Story posted:

I'm much more used to getting criticized for trying to do generic stuff and use boost or templates in general in game development.

And, honestly, rightly so.

This kind of "abstraction" is not helpful. To begin with, it doesn't really save you from having to understand the underlying math. You still have to understand, for example, what a transformation is, what a dot product is, and when and why to use them. All this kind of wrapper code does is save you from having to know how to do a matrix multiplication or how to form a quaternion, which is a minor gain at best and a dubious one at that. Useful abstractions are at a much higher level than this: things like "look at", or "move to", or "follow".

Secondly, compile times and debug build speed are real issues. Adding a ton of template boilerplate to nearly every source file is nowhere near worth the small gain in genericness you're getting. And that genericness isn't even that useful. You pick a handedness, orientation, and row/column convention and stick with it. Being able to swap those around on the fly makes things more confusing, not less.

Having a bunch of temporary objects flying all over the place when you're trying to do something relatively simple doesn't help anyone. Yeah, you can hope the compiler is smart enough to optimize it all away, but an unusably slow debug build is no good either.

This kind of expression template stuff is interesting in an academic sense, but I wouldn't want anything like it anywhere near any actual engine code I was working on.

Paniolo
Oct 9, 2007

Heads will roll.

Tres Burritos posted:

Does anyone have any good links on terrain generation they'd like to share?

My cursory googling brings some stuff up, but not all of it is wholly readable and not all of it is wholly useful.

I figure I'll start small with 2d maps and then work my way up if that helps narrow it down at all.

I wrote this about a year ago when I was toying around with a terrain engine. It's 3D and voxel-based, so a bit more complex than what you're asking for, but the basic pattern of starting with simple mathematical terms and combining them to get increasing complexity is the same.

http://code-freeze.blogspot.com/2012/04/rebuilding-world-one-iteration-at-time.html


Max Facetime
Apr 18, 2009

That Turkey Story posted:

C++ code:
  auto const transformed_point
    = transform( scale( width = 2.0, height = 4.0 )
               , translation( left = 0.5 * meters )
               , rotation( roll = 60.0 * degrees )
               , position                           // What to transform
               );
For example, here it will double the width and quadruple the height, leaving the depth unchanged. It will then translate within that new reference frame (so the translation effectively goes left by 1 meter, not 0.5 meters, which is the main "tricky" part that I'd like to convey better directly within the code).

Just like matrix multiplication, the order is not commutative. If you want the translation to not be affected by the scale, for instance, you do it before the scale parameter.

That really is the sticking point: transform( scale( width = 2.0, height = 4.0 ), translation( left = 0.5 * meters ), position); versus transform( translation( left = 0.5 * meters ), scale( width = 2.0, height = 4.0 ), position);

One of those is what I want; the other one is completely wrong. The value of having a higher-level API should be in the API making it obvious what I need to do to get what I want. Or, failing that, giving me some clues that help me figure out which one is which.

Here's a slightly changed version with the goal of sidestepping the problem:

C++ code:
  auto const transformed_point
    = transforms( scale( width = 2.0, height = 4.0 )
                , move( left = 0.5 * meters )
                , rotate( roll = 60.0 * degrees )
                ).apply_on( position );
The word transforms implies an order-independent list of transformations, which are all applied conceptually at the same time. This means move always does what it says, like it should.

Implementing fuller support for going between frames of reference is left as an exercise for the author :)
