giogadi
Oct 27, 2009

Alright, I'm about to go batshit-insane here. I'm trying to build a very, very easy example application using FLTK and OpenGL.

I start by making my own subclass of FLTK's "Fl_Gl_Window" class, basically just specifying a draw() method and a constructor (which calls the base class's constructor). Below is what's in MyGlWindow.h (minus includes and such):

code:
class MyGlWindow : public Fl_Gl_Window
{
    void draw();

public:
    MyGlWindow(int w, int h, char*l)
        : FL_Gl_Window(w, h, l) {}
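    // (Note: "FL_Gl_Window" above, with a capital "L", is not the base
    // class "Fl_Gl_Window"; that one-letter typo is what produces both
    // compiler errors quoted below.)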
};
If it seems simple, it's because it is. draw() is implemented in the .cpp file, and my constructor simply calls the base class's constructor explicitly with arguments specifying the window's width, height, and title.

Now, I'm getting two errors when I try to compile this:

C2512 - 'Fl_Gl_Window' - no appropriate default constructor available
C2614 - 'MyGlWindow' - illegal member initialization - 'Fl_Gl_Window' is not a base or member.


So wtf. The default constructor error shouldn't be happening because in my main function, the ONLY time any window is constructed, it's with those parameters (I never try to make a MyGlWindow without specifying constructor parameters). As for the second error, IT SAYS RIGHT THERE IN THE CLASS DEF that Fl_Gl_Window is a base class of MyGlWindow.

I've been looking up and down and backwards and forwards and even delved into FLTK's source to see if there was anything I could do, but alas I've been beaten. Hell, just for sanity's sake, here's the code from my main() function:

code:
int main(int argc, char** argv)
{
    MyGlWindow* window = new MyGlWindow(320, 320, "FLTK OpenGL Test");
    window->end();
    window->show();
    return Fl::run();
}
Simple, really. I've stared at this long enough that I clearly can't see my own mistake anymore. Can anyone spot what I'm doing wrong? If it helps, I'm using VC++ 2008.

Thanks for your time, everyone.

giogadi
Oct 27, 2009

OddObserver posted:

wisdom

:saddowns:

giogadi
Oct 27, 2009

I've got an object-oriented design question for y'all. Let's say I'm building a robot collision detection system like so:

code:
class Obstacle
{
public:
  virtual ~Obstacle() {}
  virtual bool isInCollision(const Robot& robot) = 0;
};

class Circle : public Obstacle
{
public:
  // Overrides parent's general collision function
  bool isInCollision(const Robot& robot);
};

class Square : public Obstacle
{
public:
  // Overrides parent's general collision function
  bool isInCollision(const Robot& robot);
};
This works very well so far. Now, I'd like to add code to render these obstacles. Obviously, the best way to do this would be to add a virtual render() method to Obstacle and then specialize it in each of the subclasses. However, I would prefer to separate my obstacles' rendering code from these general collision-detection classes (this way I can minimize dependencies when I'm not interested in rendering the obstacles).

What would be the best way to do this? My best guess so far would be to define (in a separate module) some classes like:

code:
class Renderable
{
public:
  virtual ~Renderable() {}
  virtual void render() = 0;
};

class RenderableCircle : public Circle, public Renderable
{
public:
  void render();
};

class RenderableSquare : public Square, public Renderable
{
public:
  void render();
};
Then, I could define some function like

code:
void render(Obstacle*)
which would do some nasty downcasting in order to get the behaviour I want. Is there a more elegant way to add polymorphic rendering functionality without (1) downcasting or (2) having to add the rendering code directly to the collision-detection code?
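
To make the downcasting concrete, here's roughly what that render(Obstacle*) free function would look like (just a sketch of the idea, not code I've actually written):

code:
void render(Obstacle* obstacle)
{
  // Try each concrete type in turn; dynamic_cast returns null on a mismatch.
  if (RenderableCircle* c = dynamic_cast<RenderableCircle*>(obstacle))
    c->render();
  else if (RenderableSquare* s = dynamic_cast<RenderableSquare*>(obstacle))
    s->render();
  // Every new obstacle type means another branch here, which is exactly
  // the ugliness I'd like to avoid.
}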

giogadi
Oct 27, 2009


Sup robotics buddy :hfive:

RRT's performance can sometimes be weird due to its randomness, so issues can be hard to pin down. If you post your full source code I can try to help you out. RRTs and related algorithms are my primary research area.

giogadi
Oct 27, 2009

Praseodymi posted:

This is my code:
code:
		Vector2D xRand;
		xRand.x = rand() % width;
		xRand.y = rand() % height;

		Vector2D xNear = points[0];

		for ( unsigned int j = 0; j < points.size(); j++ )
		{
			if ((points[j] - xRand).sqrLength() < (xNear - xRand).sqrLength())
				xNear = points[j];
		}

One thing I did notice is that, the way you've implemented the point sampling, you're actually sampling from a fixed, discrete grid of integer points (rand() % width can only produce whole numbers). The grid spacing is always one unit, so the larger the width and height, the finer the grid is relative to the space. This might explain why the method's performance seems to improve with larger maps.

You should change the sampling so each coordinate is drawn from the full RAND_MAX+1 possible values of rand(), spread across the map's extent. This way the performance will be less sensitive to the size of your map.

code:
xRand.x = width * ((double) rand() / RAND_MAX);
xRand.y = height * ((double) rand() / RAND_MAX);
This'll work better, but it still isn't ideal: the standard rand() function on most systems has really lovely randomness properties, and RAND_MAX is only guaranteed to be at least 32767, so the granularity varies from platform to platform. Boost's random library is great for stuff like this.
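
If you do move off rand(), here's a sketch of what the sampling looks like with C++11's <random> (Boost.Random has essentially the same interface, so this carries over):

code:
#include <random>

std::mt19937 rng(42);  // Mersenne Twister engine; seed it however you like
std::uniform_real_distribution<double> sampleX(0.0, width);
std::uniform_real_distribution<double> sampleY(0.0, height);

Vector2D xRand;
xRand.x = sampleX(rng);
xRand.y = sampleY(rng);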

About VS hanging at the sqrLength() line: how did you test for "hanging"? Did you just break execution randomly and find it was always in that spot? On typical problems RRT spends most of its time deciding whether a point is in collision or not, but in some cases the most expensive part is that distance comparison, which would explain why the program so often happens to be there when you break execution.

giogadi
Oct 27, 2009

The Gay Bean posted:

I have a custom data class that is formed for and used by a specific algorithm. The data backing it is just a collection of 1D and 2D STL vectors and a cv::Mat (which can be stored using a 2D STL vector). I coded up serialization using boost::serialization because it's easy to read and fairly standard, but it turned out to be slow as poo poo in debug mode - taking several minutes to write / read a 1 GB archive. This is reduced to like 10 seconds in release mode, but for the time being I need to run the code in debug mode a lot, and I can't put up with those kinds of wait times.

I'm sure I could write something that works solely on ofstream objects that would be 100 times faster, but it would be messy and harder to read. But is there some magical compiler flag I'm missing (code is cross-platform MSVC 8 / GCC) or a library that does almost the same thing but isn't slow as poo poo, so I don't have to reinvent the wheel next time I want to serialize a data structure?

edit: I already turned iterator debugging off on MSVC8

boost::serialization is weird. On the one hand you get nice DRY ways of saving and loading stuff, and it even supports the STL; on the other hand, its binary mode is platform-dependent, it's slow as poo poo, and they still can't seem to make floating point numbers round-trip exactly.

If your code is already entrenched in boost::serialization and you just want more speed, look into setting the flag that stops boost from doing extra bookkeeping to serialize pointers. Unfortunately I'm on my phone now and can't find the flag in the docs, but basically as long as you aren't serializing pointers you can get more speed this way.
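
If I'm remembering the mechanism right, it's the per-type tracking trait, something like the following - but please double-check the docs before trusting me on the exact spelling (MyData is a stand-in for your own type):

code:
#include <boost/serialization/tracking.hpp>

// Tell boost::serialization not to track instances of MyData, skipping the
// bookkeeping it normally does so that repeated pointers serialize once.
BOOST_CLASS_TRACKING(MyData, boost::serialization::track_never)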

Another choice is to just use a different library. I haven't tried it out yet, but I've heard that protocol buffers are very fast. This would however require you to do some rewriting.

giogadi
Oct 27, 2009


This is actually a super interesting question, but it might be out of scope for this thread. For what it's worth, I'm certain that fitting an affine transform to a dataset has been done before, but without doing any research, you might try a hybrid approach: use least squares to get an initial solution, then toss that initial solution into a local nonlinear optimizer like Levenberg-Marquardt (Ceres has this, actually) with your constraint as the objective function.
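
To sketch the least-squares half of that (assuming 2D points and Eigen purely for illustration; fitAffine is a made-up name):

code:
#include <Eigen/Dense>

// Least-squares fit of an affine map y ~ A*x + t to paired 2D points.
// Rows of X and Y are corresponding points; returns the 2x3 matrix [A | t].
Eigen::Matrix<double, 2, 3> fitAffine(const Eigen::MatrixX2d& X,
                                      const Eigen::MatrixX2d& Y)
{
    Eigen::MatrixX3d M(X.rows(), 3);
    M << X, Eigen::VectorXd::Ones(X.rows());  // each row is [x_i^T, 1]
    // Solve M * P = Y in the least-squares sense; P is 3x2, i.e. [A | t]^T.
    Eigen::Matrix<double, 3, 2> P = M.colPivHouseholderQr().solve(Y);
    return P.transpose();
}
That would give Levenberg-Marquardt a sane starting point, and the nonlinear pass then handles whatever constraint you're actually enforcing.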

This is fun to think about but my idea's probably horrible and this is definitely a derail so I'll stop here.

giogadi
Oct 27, 2009

hooah posted:

Does anyone here have any experience using OpenCV with Visual Studio? I'm having a variety of problems I'd like to have help fixing and the OpenCV answers section isn't being helpful. Email's in the profile if you'd be so kind.

I can probably help, but I couldn't find your email address in your profile.

giogadi
Oct 27, 2009

hooah posted:

I need to be able to grab a word from an input sentence and look up counts and probabilities from these two tables. I can set up 2-d arrays for the tables, that's no problem, but getting the indices into those tables is proving problematic. I thought of making enums for the words and tags, but that would leave me making a bunch of if/then statements to figure out which word is being looked at. Is there a standard/normal way to do something like this?

You can use a hash table (unordered_map, C++11 only) or a map for this. Either way, your keys could be strings representing each of the possible words or tags.
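
A minimal sketch of what I mean (the table and variable names are made up):

code:
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> wordIndex;  // word -> row in the table
std::unordered_map<std::string, int> tagIndex;   // tag  -> column

// Build the indices once from your vocabulary; then each lookup is O(1):
//   int row = wordIndex.at(word);
//   int col = tagIndex.at(tag);
//   double p = probTable[row][col];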

giogadi
Oct 27, 2009

Blotto Skorzany posted:

Sed contra,

C code:
#include <cat-in-the-hat.h>

:(

code:
$> gcc seuss.c
seuss.c:5:1: error: stray ‘#’ in program
seuss.c:5:4: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘{’ token
seuss.c:6:5: error: stray ‘#’ in program
seuss.c:7:10: error: stray ‘#’ in program
seuss.c:8:1: error: stray ‘#’ in program
seuss.c:9:6: error: stray ‘#’ in program
seuss.c:11:1: error: stray ‘#’ in program
seuss.c:9:11: error: unknown type name ‘end’
seuss.c:11:1: error: stray ‘#’ in program
seuss.c:11:1: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘define’
seuss.c:11:1: error: stray ‘#’ in program
seuss.c:11:1: error: unknown type name ‘define’
seuss.c:11:1: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:12:6: error: stray ‘#’ in program
seuss.c:35:29: error: expected identifier or ‘(’ before ‘{’ token
seuss.c: In function ‘grem’:
seuss.c:38:57: warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]
seuss.c: In function ‘cred’:
seuss.c:46:183: warning: incompatible implicit declaration of built-in function ‘printf’ [enabled by default]
seuss.c: In function ‘main’:
seuss.c:56:266: error: stray ‘#’ in program
seuss.c:56:272: error: expected ‘)’ before ‘;’ token
seuss.c:56:281: error: expected ‘;’ before ‘exit’

giogadi
Oct 27, 2009

I seem to recall some random trivia that, if you want to represent a 3D vector as a struct, it can be advantageous to actually store 4 values:

code:
struct Vector3d {
    float x, y, z, unused;
};
Has anyone else heard this? My only guess is that the padding keeps each vector from straddling a cache-line boundary: a 16-byte vector always divides a 64-byte cache line evenly, while a 12-byte one doesn't, so a padded vector can always be read with a single cache access. But there's a tradeoff here, right? It seems like you can't store as many vectors in the cache if each one is bigger.

What do y'all think?

giogadi
Oct 27, 2009

cheetah7071 posted:

Are you sure this isn't because of quaternion math? Rotating vectors is really easy when the number of dimensions is a power of two, so the usual way it's handled is to pretend your vector is four-dimensional, rotate it, and then go back to ignoring the fourth dimension. It's used all the time in computer graphics.

I’m not sure - I always thought the fastest way to rotate a 3D vector is just to multiply by a 3x3 rotation matrix. Like, you can rotate a vector directly with a quaternion, but it’s about as fast as converting the quat into a rotation matrix and rotating that way. But I might be wrong!

giogadi
Oct 27, 2009

Absurd Alhazred posted:

Well, depending on the compiler, it might be able to automagically compile a lot of vector code into SSE intrinsics. If you use a library dedicated to 3d-graphics-adjacent vector math, like GLM, you can enforce this through template parameters, even.

more falafel please posted:

Yeah, it's for vector intrinsics, which usually need their arguments to be aligned on 4-word/16-byte boundaries.

Thanks! I'm trying to experiment to make sure I understand the concept correctly. It turns out that just putting 4 values in the struct does not guarantee 16-byte alignment of the struct. Here's an example that shows this:

code:
#include <iostream>

struct Foo {
    float w,x,y,z;
};

int main() {
    char x;  // here just to nudge the stack layout off a 16-byte boundary
    Foo f;
    std::cout << &f << std::endl;
}
The output on my machine is "0x7ffee84cf8c8", which is not 16-byte aligned. Does this mean that the compiler will never vectorize my code unless I explicitly add "alignas(16)" to the Foo declaration, or is that something the compiler could also handle on its own?
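
For reference, the explicit opt-in I'm talking about looks like this, and a static_assert makes the assumption checkable at compile time:

code:
struct alignas(16) Foo {
    float w, x, y, z;
};

static_assert(alignof(Foo) == 16, "Foo should be 16-byte aligned");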

giogadi
Oct 27, 2009

Yeah, that was super helpful. Much appreciated!

giogadi
Oct 27, 2009

Ruzihm posted:

splay with treez nuts

:vince:

giogadi
Oct 27, 2009

Not to mention lldb is a huge buggy piece of poo poo. I use an old macbook for game dev and I’m thinking of switching to gcc just so I can maybe have better luck with gdb

giogadi
Oct 27, 2009

Plorkyeran posted:

You can use gdb with clang and lldb with gcc.

Oh really? For some reason I’ve had a lot of trouble getting gdb to work on my clang builds on mac

giogadi
Oct 27, 2009

Seconding imgui. I no longer dread doing UI stuff for tools.

giogadi
Oct 27, 2009

I do dev on an old laptop and my C++ compilation times are just excruciating. The worst case, of course, is changing a declaration in a header file that a lot of translation units include, which forces all of those files to recompile.

Just thinking aloud: the alternative to using a common header file would be to define those util functions as free functions with external linkage and then let the linker handle it instead. I realize there are issues with this approach, but it would probably mean way fewer recompiles when those util functions change, right? Or does the time saved in compilation get eaten up at link time or something?

giogadi
Oct 27, 2009

Sorry, by “changing declarations” I meant harmless stuff like adding new symbols or deleting unused ones. Even changing an existing declaration would only require recompiling the files that use that declaration, which would already be a huge win (theoretically) for recompile times.

giogadi
Oct 27, 2009

Sorry, I think I was being unclear. This is what I'm suggesting:

code:
// foo.cpp
int HowdyPardner() {
    return 5;
}
code:
// main.cpp
extern int HowdyPardner();

int main() {
    int x = HowdyPardner();
    return x;
}
This avoids the use of a header file for main.cpp to use the functions in foo.cpp. If I add more functions to foo.cpp, it won't require recompiles of main.cpp. Like I said, this approach has issues, like having to keep the extern declarations in sync with the definitions. My question is: are there any other downsides to this approach that I'm not aware of? Obviously it becomes more problematic with lots of users of the code in foo.cpp, because if a function's signature changes, I'd have to go through and update not only call sites but also every extern declaration.

giogadi
Oct 27, 2009

Hmmmm modules indeed look like what I really want. I’ve heard people complain about them so I haven’t looked at them yet, but it might be worth at least experimenting with them a bit

giogadi
Oct 27, 2009

You’re 100% right that the simplest answer is to just get a faster machine, but in my naive heart it just feels wasteful to get a new computer when this 9 year old MacBook Pro is super fast still! Just compile times suck when making a change to a commonly included header.

Overall I tend to hold onto hardware until it’s actually unusable, and part of my reason for developing on this old thing is to ensure that other folks on old hardware will be able to run and enjoy my game

e: just to be even more annoying: I wonder all the time whether compilers, applications, games, etc would be better if dev shops just didn’t exclusively use the fastest machines available

giogadi fucked around with this message at 21:18 on Sep 2, 2022

giogadi
Oct 27, 2009

All the talk earlier about pointers and inheritance makes me think: it’s interesting how polymorphism and pointers are so fundamentally linked. I.e., if you want a vector of polymorphic objects, you’re just expected to do it as a vector of pointers (usually to dynamically allocated memory). Since this pattern is bad for cache usage, it would be nice if it were easier to do polymorphic stuff without each individual object being a dynamic allocation.

I know it’s possible: I’ve written, for instance, a container that exposes a “vector<Base*>” interface but internally stores each derived type’s instances contiguously in its own separate vector (rough sketch below). But this was pretty ugly and difficult. It makes me wonder if a language could encode polymorphism in a way that makes something like that much easier to do, or maybe even the default way to use polymorphism.
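
To give a flavor of the idea (a stripped-down sketch, not the actual container I wrote; Foo and Bar are stand-ins):

code:
#include <vector>

struct Base { virtual void update() = 0; virtual ~Base() {} };
struct Foo : public Base { void update() {} };
struct Bar : public Base { void update() {} };

// Each concrete type lives contiguously in its own vector; the Base*
// "view" is just an index built over those vectors.
class PolyStore {
    std::vector<Foo> foos_;
    std::vector<Bar> bars_;
    std::vector<Base*> view_;
public:
    void add(const Foo& f) { foos_.push_back(f); }
    void add(const Bar& b) { bars_.push_back(b); }
    // Growing a vector invalidates pointers into it, so the view is rebuilt
    // after insertions rather than maintained incrementally.
    const std::vector<Base*>& view() {
        view_.clear();
        for (size_t i = 0; i < foos_.size(); ++i) view_.push_back(&foos_[i]);
        for (size_t i = 0; i < bars_.size(); ++i) view_.push_back(&bars_[i]);
        return view_;
    }
};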

giogadi
Oct 27, 2009

Here’s a great treatment on the subject of polymorphic vectors:

https://johnysswlab.com/process-polymorphic-classes-in-lightning-speed/

But my main point is to wonder whether there are language features that could make this kind of thing easier - all of these solutions, while clever and effective, take a lot of work to implement

giogadi fucked around with this message at 14:40 on Nov 16, 2022

giogadi
Oct 27, 2009

New std::print in C++23? What was wrong with printf?

giogadi
Oct 27, 2009

In fact, C++ only started requiring two’s complement for signed numbers in C++20

giogadi
Oct 27, 2009

Sigh, kinda wish they had just made vector sizes a signed int. Who actually needs that last bit of storage afforded by 2^64 vs 2^63?

Unsigned arithmetic is important but I feel like it should be totally opt-in, not mandated by the most basic vector api

giogadi
Oct 27, 2009

It’s not like they’d actually allow negative sizes: the sizes are all controlled by the vector api. But then it would allow completely sane and sensible comparison with signed ints which is what people should be using all the time anyway. “Unsigned int tells people it can’t be negative” is not actually useful when the failure mode ends up being a silent overflow.

E: or underflow, loving whatever.

giogadi fucked around with this message at 18:13 on Jan 4, 2023

giogadi
Oct 27, 2009

Volte posted:

Unsigned int should usually mean "raw bit pattern" not "integer value that logically can't be negative". Sometimes even values that can't be negative can be subtracted from and you end up with really stupid bugs when the difference would be negative. Subtracting two vector sizes to get the difference between them? Better make drat sure to check which one's bigger first.

Absolutely correct. Better than I could phrase it
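
For anyone who hasn't been bitten by this yet, the failure mode in question:

code:
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a(3), b(5);
    // size() is unsigned, so 3 - 5 doesn't give -2: it wraps around to
    // 18446744073709551614 with a 64-bit size_t.
    std::printf("%zu\n", a.size() - b.size());
    return 0;
}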

giogadi
Oct 27, 2009

roomforthetuna posted:

Non-serious counterpoint to signed sizes are good - Microsoft loving things up by using 32-bit size_t for posix file operations unless you go out of your way to #define some poo poo would be *even worse* if the accidental maximum size was ~2GB instead of ~4GB.

(Though yuck, ssize_t and size_t, instead of size_t and usize_t)

Hahaha what the gently caress

giogadi
Oct 27, 2009

Ihmemies posted:

I just wanted to drop by and state that bit operators in C are pure hell. Like I have to swap signed char pointer's bits around. 01001001 to 10010010 etc.

I can swap values in an array, but understanding how to do that poo poo with bit operations is beyond me.

*ptr |= (1 << 1); *ptr |= 1; first_bit = (*ptr & 1); etc... maybe I'll understand that some day. Otherwise I'm just skipping this and hope I never have to manually adjust bit values again. I don't even want to know why anyone would ever want to do anything like this. I wish I could move on from C and C++ to something reasonable like rust or C#. These legacy languages are so goddamn sweaty.

Is this for a class?

It’s sweaty because you’re doing sweaty stuff. If you wanted to work with bits in any other language it would also be sweaty

giogadi
Oct 27, 2009

Ihmemies posted:

Yes. Thanks. Why would anyone want to work with bits though...

Now this is an excellent question. It’s your terrible class’s fault for failing to motivate this stuff. Of course you hate it if you don’t even know why you’re mucking around in it.

There are a million uses for bitwise operations. One of the fun uses is that sometimes, you can store data in bits way more efficiently than anything else.

For example, if you have a long array of data, where some items are occupied/valid and other items are empty/invalid, you can store the occupied/empty status of each element in a big ol’ bitstring. This is more efficient than storing a bool for each item because a bool is usually at least 8 bits.

Then you can do clever stuff like iterating over only the occupied items way faster than if you were iterating one-by-one over each item and checking “is this occupied or not?”

This is all fancy stuff to squeeze out performance or space, and most elaborate bit manipulation is for that kind of thing. You also need bitwise operations when working on embedded systems or talking to hardware without a high-level API.
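
Here's a tiny sketch of the occupancy-mask trick (the count-trailing-zeros intrinsic below is GCC/Clang; MSVC spells it _BitScanForward64):

code:
#include <cstdint>
#include <cstdio>

int main() {
    // One bit per slot: a set bit marks an occupied item.
    uint64_t occupied = (1ull << 3) | (1ull << 10) | (1ull << 41);

    // Visit only the occupied slots by peeling off the lowest set bit each
    // iteration, instead of scanning all 64 slots one by one.
    for (uint64_t m = occupied; m != 0; m &= m - 1) {
        int slot = __builtin_ctzll(m);  // index of the lowest set bit
        std::printf("slot %d is occupied\n", slot);
    }
    return 0;
}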

giogadi
Oct 27, 2009

rjmccall posted:

I keep having to reimplement bit vectors despite having access to the STL ones. The STL bitvectors are perfectly fine implementations, but they’re missing a lot of key operations. The only bitset-specific operation on std::vector<bool> is flip, and std::bitset only adds a handful of operations: any, all, &, |, and ^. Both are missing operations like “popcount”, “iterate all of the set bits”, and “find the lowest/highest set bit”, all of which can be done significantly more efficiently on bit vectors than a generic algorithm would reasonably get optimized to.

100%. Such a missed opportunity not to at least provide an iterator over the set bits

giogadi
Oct 27, 2009

Ihmemies posted:

I have asked the question in every course - why does the tester use some old version of the language?

The answer has always been that version X of the language offers the necessary tools to solve the exercises.

Well, yes, thanks.

Know that I’m very angry for you

giogadi
Oct 27, 2009

It’s fine but don’t force your students to use it

giogadi
Oct 27, 2009

I'm being an idiot and overthinking a simple 4x4 matrix class. I'd like to use the following trick to get Vector3 views of each of the matrix columns without making copies; can this cause problems on any platform?

code:
#include <cassert>

struct Vector3 {
    float x, y, z;
};

struct Matrix4 {
    union {
        float _data[16];
        struct {
            float
                _m00, _m10, _m20, _m30,
                _m01, _m11, _m21, _m31,
                _m02, _m12, _m22, _m32,
                _m03, _m13, _m23, _m33;
        };
        struct {
            Vector3 _col3_0; float _col3_padding_0;
            Vector3 _col3_1; float _col3_padding_1;
            Vector3 _col3_2; float _col3_padding_2;
            Vector3 _col3_3; float _col3_padding_3;
        };
    };
};

int main() {
    Matrix4 mat;
    // <populate matrix>
    Vector3 v = { mat._m01, mat._m11, mat._m21 };
    Vector3 const& w = mat._col3_1;

    assert(v.x == w.x && v.y == w.y && v.z == w.z);

    return 0;
}
This "works on my machine", but I'm paranoid that some architectures/compilers might have padding quirks that break this

giogadi
Oct 27, 2009

Thanks for the feedback! I didn’t know about the weirdness around union aliasing - does this mean that all unions are suspect, or just the case where you’ve written an object through one union member and then turn around and read the object through a different union member?

I ask because I use tagged unions quite a bit as well.

e: so is it also sketchy to alias between the raw float array (_data) and the element-wise members like _m00? Man, that’s super useful though.

giogadi fucked around with this message at 14:46 on Feb 18, 2023

giogadi
Oct 27, 2009

roomforthetuna posted:

Have you considered just using inlineable getter functions and helper classes/templates for this? You can also inline functions like _m00() or (as a template for less definition) _m<0><0>().

Absolutely - this is what I plan to do now. I went the union route initially partly due to laziness around implementing the inline accessors, and partly out of curiosity. I’m glad I learned something new today about the weirdness around union aliasing. The game studio I work at does type punning with unions all the time, so I had assumed it was perfectly safe.
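
For the curious, the accessor version I'm switching to looks something like this (a sketch reusing the Vector3 struct from my earlier post; it returns the column by value, which gives up the no-copy goal but sidesteps the aliasing question entirely):

code:
struct Matrix4 {
    float _data[16];  // column-major: element (row, col) is _data[4*col + row]

    float& m(int row, int col) { return _data[4 * col + row]; }
    float m(int row, int col) const { return _data[4 * col + row]; }

    // Copy out the xyz part of a column; cheap enough once inlined.
    Vector3 col3(int col) const {
        Vector3 v = { _data[4 * col + 0], _data[4 * col + 1], _data[4 * col + 2] };
        return v;
    }
};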

giogadi
Oct 27, 2009

Xerophyte posted:

I'm not sure I strictly recommend rolling your own linear algebra library either. If you have very strict requirements around 3rd party code or if this is something you want to do for its own sake then, sure, go for it. Otherwise, chances are Eigen will do as well as anything you can write for any non-sparse use case.

This is absolutely a terrible idea, I know. I’m doing it for fun, but also out of a compulsion to avoid libraries with hella templates and call depth. I’m working on a game on a super old laptop and I highly value fast compiles and fast performance in debug builds; I don’t need 95% of what most generic linear algebra libs provide, and I couldn’t find any libs that don’t depend heavily on templates and inlining, which is not great for compile times or debug performance.
