|
Dren posted:Hard disagree. cmake is much nicer than a pile of crusty make garbage.
|
# ? Dec 17, 2020 18:45 |
|
Well yeah, that's the point I was making. I like to pick the simplest tool for the job at hand. If your project's build needs are relatively simple and it does not have to support Windows, then Make is typically going to end up being the less-worse solution. The source of bad experiences with Make seems to me to be:

a) When the build becomes more complex, you have to bolt that behavior on in ways that are not always compatible with Make's rule system (multi-platform builds, specific library-version dependencies, etc). Hacking that into Make in an elegant way is a hard problem that seems to rely on either Make invoking external tools or some black-magic script producing impenetrable behavior. Something like CMake already hacked that for you, and is pretty good at sweeping it under the rug so that you do not have to think about it, if you're lucky.

b) As a project's build requirements grow more complex, the problem is typically not solved by moving to another build system. So the big ball of mud grows, until it is an atrocious monster no one wants to touch when it breaks.

CMake, Bazel (gently caress Bazel), etc. solve a lot more complex problems than 'rebuild that file because it is too old'. They are complex beasts because the problems they try to solve are messy and complex. Along the way they make assumptions about common use cases and hide some of that complexity behind easy-to-use interfaces. So it's definitely easy for many use cases, but they're an impenetrable mess inside. If your project suddenly does not fit those use cases well anymore, or something goes wrong, it's a miserable experience. In Rich Hickey's 'simple made easy' terminology: CMake is easy & complex, Make is hard & simple.

efb:

Presto posted: I'll take crusty make garbage over baffling cmake garbage any day.
|
# ? Dec 17, 2020 18:51 |
|
IME make is easier for a small project but CMake is much better for large projects. I'm not sure exactly where the point of tradeoff is, but I've definitely run into it on projects that aren't all that large.
|
# ? Dec 17, 2020 18:54 |
|
CMake isn't even a competitor to make so I don't really know where this discussion is coming from. I've rarely seen a project that was literally just a hand-written Makefile, nor would I ever want to write or maintain that. CMake is a competitor to something like autotools, and I'll take CMake any day.
|
# ? Dec 17, 2020 19:05 |
|
Plorkyeran posted:I guess it is technically true. If you can get away with make you either have something very simple and it doesn't matter what you use or you have a very constrained environment where it might actually be okay. I would rather handwrite ninja tho.
|
# ? Dec 17, 2020 20:16 |
|
Beef posted: Well yeah, that's the point I was making. I like to pick the simplest tool for the job at hand. If your project's build needs are relatively simple and it does not have to support Windows, then Make is typically going to end up being the less-worse solution.

Or IDEs, or any sort of configuration whatsoever. Make is acceptable as a replacement for a 10-line shell script that you want incremental builds from and not much else.
|
# ? Dec 17, 2020 20:18 |
|
I mean, if you have a build that only ever links against C libraries that already come with the system, and you don't need any configurability, Make is still a bad and slow version of Ninja. If you have more requirements, using make is just self-harm.
|
# ? Dec 17, 2020 20:19 |
|
Beef posted: Well yeah, that's the point I was making. I like to pick the simplest tool for the job at hand. If your project's build needs are relatively simple and it does not have to support Windows, then Make is typically going to end up being the less-worse solution.

I know I won’t convert a cmake hater, but even a simple project in cmake is easier than make, unless you don’t know cmake.

Volte posted: CMake isn't even a competitor to make so I don't really know where this discussion is coming from. I've rarely seen a project that was literally just a hand-written Makefile, nor would I ever want to write or maintain that. CMake is a competitor to something like autotools, and I'll take CMake any day.

I’ve worked on a project that was a big bunch of hand-jammed makefiles and I’ve worked on an autotools project, and I can tell you that cmake is way better than both. It sounds like OP is working on a rather large project that was all makefiles, so that’s where the discussion is coming from.
|
# ? Dec 18, 2020 01:56 |
|
Dren posted:It sounds like OP is working on a rather large project that was all makefiles, so that’s where the discussion is coming from. Yeah that's what it looks like.
|
# ? Dec 18, 2020 08:45 |
|
Is std::vector::insert generally faster than memmoving part of an array to insert elements? I'm currently building an array by repeatedly inserting a bunch of elements, but it's slow and I don't have any reason not to switch to std::vector.
|
# ? Jan 21, 2021 00:35 |
|
If you're copying the entire array every time you add an element then yes, a vector will be noticeably faster - but your process of repeatedly inserting one element at a time into the middle is just inherently slow. The vector will still have to copy half the elements for every single insert. Alternative solutions you could try:

- Use a linked list. Generally slow to access, excruciatingly slow to access by index, but once you've located your insertion point it's very quick to insert an element in the middle.
- Append new elements to the end, then sort after you've added all the elements.
- If all the elements are being inserted at the same place, move the existing elements to make space once, and then fill in all the gaps.
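Not anyone's actual code, just a minimal sketch of the append-then-sort option (the element type and names are made up for illustration):

code:
#include <algorithm>
#include <vector>

// Append everything first, then sort once at the end, instead of inserting
// each element into the middle (which shifts half the array every time).
std::vector<int> build_sorted(const std::vector<int>& input) {
    std::vector<int> out;
    out.reserve(input.size());          // one allocation up front
    for (int x : input)
        out.push_back(x);               // O(1) amortized, no shifting
    std::sort(out.begin(), out.end());  // single O(n log n) pass
    return out;
}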
|
# ? Jan 21, 2021 01:04 |
|
feelix posted: Is std::vector::insert generally faster than memmoving part of an array to insert elements? I'm currently building an array by repeatedly inserting a bunch of elements, but it's slow and I don't have any reason not to switch to std::vector.

Do you know the number of/have all of the elements beforehand? You can reserve w/ vector or calloc a nice block of memory ahead of time. How fast does it need to be?
|
# ? Jan 21, 2021 01:31 |
|
Sweeper posted: Do you know the number of/have all of the elements beforehand? You can reserve w/ vector or calloc a nice block of memory ahead of time.

I just statically allocate to a huge size; memory is not an issue. I'm implementing this in C++, with modification. It appears that the edge extraction is the slowest part by far, so any acceleration in that part would be helpful. I am extracting an array of unique edges from an array of triangles. I bet there's a faster way to do this on the fly during triangulation, but the triangulation routine I'm using is a black box to me right now and I'd prefer to keep it that way if possible. So, in order to ensure uniqueness, I loop through the array until I exceed the sorting value of the new edge I'm trying to add, break if I've found a match, and insert the new element if I haven't found one. After building up the array, I only need to access the data once per iteration, so it sounds like a linked list would be the way to go.
|
# ? Jan 21, 2021 04:45 |
|
Wait, this is solely to ensure uniqueness? Just use a std::unordered_set, and your implementation will absolutely fly compared to anything you can put together with lists or arrays.
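As a rough sketch of how that could look for the edge-extraction case (the Edge type, the smaller-index-first normalization, and the hash below are assumptions for illustration, not anything from the actual project):

code:
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical edge keyed by two vertex indices, stored smaller-first so
// (a,b) and (b,a) count as the same edge.
struct Edge {
    int a, b;
    Edge(int x, int y) : a(x < y ? x : y), b(x < y ? y : x) {}
    bool operator==(const Edge& o) const { return a == o.a && b == o.b; }
};

struct EdgeHash {
    std::size_t operator()(const Edge& e) const {
        std::uint64_t key = (static_cast<std::uint64_t>(static_cast<std::uint32_t>(e.a)) << 32)
                            | static_cast<std::uint32_t>(e.b);
        return std::hash<std::uint64_t>()(key);
    }
};

// Collect the unique edges of triangles given as vertex-index triples.
std::unordered_set<Edge, EdgeHash> unique_edges(const std::vector<std::array<int, 3>>& tris) {
    std::unordered_set<Edge, EdgeHash> edges;
    for (const auto& t : tris) {
        edges.emplace(t[0], t[1]);
        edges.emplace(t[1], t[2]);
        edges.emplace(t[2], t[0]);
    }
    return edges;
}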
|
# ? Jan 21, 2021 05:08 |
|
Jabor posted:Wait, this is solely to ensure uniqueness? Just use a std::unordered_set, and your implementation will absolutely fly compared to anything you can put together with lists or arrays. yes, nice, thanks!
|
# ? Jan 21, 2021 05:33 |
|
If you've got a total order on edges, using std::sort followed by std::unique is probably your best bet.
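Roughly like this, assuming the edges live in a std::vector and have operator< and operator== defined (a sketch, not the poster's code):

code:
#include <algorithm>
#include <vector>

// Sort, drop adjacent duplicates, then erase the leftover tail that
// std::unique leaves at the end of the vector.
template <typename T>
void sort_and_unique(std::vector<T>& v) {
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
}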
|
# ? Jan 21, 2021 07:43 |
|
Consider a flat_hash_map implementation from Abseil or F14 from folly if you want to go even faster.
|
# ? Jan 21, 2021 21:59 |
|
unordered_set is 30 times faster than my implementation. Thanks!
|
# ? Jan 23, 2021 08:02 |
|
You’re probably fast enough already, but you should really consider just sorting and uniquing.
|
# ? Jan 23, 2021 22:18 |
|
is there any ICU #define to have it only expose the C API when compiling with a C++ compiler?
|
# ? Jan 28, 2021 04:29 |
|
#ifdef __cplusplus gives you a section that won't be seen by a C compiler. You can do a lot with that. It's really tough to write that without a closing #endif.
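A generic (non-ICU) illustration of that guard, with made-up names — the section inside the #ifdef simply doesn't exist as far as a C compiler is concerned:

code:
/* hypothetical header, just to show the guard */
int do_thing(int x);       /* seen by C and C++ compilers alike */

#ifdef __cplusplus
/* everything in here is skipped when a plain C compiler reads the header */
namespace wrapper {
inline int do_thing_checked(int x) { return ::do_thing(x); }
}
#endif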
|
# ? Jan 28, 2021 04:46 |
|
code:
#undef __cplusplus
#include <unicode/whatever.h>
#define __cplusplus

ah yea! thanks, works
|
# ? Jan 28, 2021 04:59 |
|
now onto writing a long comment block about why the gently caress i need to mess with the compiler preprocessor symbols... it's Temporary™
matti fucked around with this message at 05:04 on Jan 28, 2021 |
# ? Jan 28, 2021 05:02 |
|
matti posted:#undef __cplusplus
|
# ? Jan 28, 2021 08:32 |
|
Absolutely do not undef __cplusplus before importing a header, what is wrong with you
|
# ? Jan 28, 2021 08:37 |
|
rjmccall posted: Absolutely do not undef __cplusplus before importing a header, what is wrong with you

Yeah, make sure you restore the original contents:

code:
|
# ? Jan 28, 2021 10:33 |
|
matti posted:#undef __cplusplus I really want to hear the story behind this one.
|
# ? Jan 28, 2021 14:19 |
|
rjmccall posted:Absolutely do not undef __cplusplus before importing a header, what is wrong with you
|
# ? Jan 28, 2021 19:42 |
|
if you must at least tell us why, got blue balls here
|
# ? Jan 28, 2021 20:53 |
|
The things headers do in reaction to compiler-defined macros like __cplusplus are frequently required in order to express their interfaces correctly. The most obvious and important example of this is adding extern “C” around any function declarations, which is necessary to make calls to them work.
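For instance, a typical C library header (made-up names here, not any real library's header) wraps its declarations like this so that C++ callers get C linkage:

code:
/* hypothetical mylib.h */
#ifdef __cplusplus
extern "C" {
#endif

int mylib_open(const char *path);   /* these keep C linkage, so a C++ */
int mylib_close(int handle);        /* caller links against the real symbols */

#ifdef __cplusplus
}  /* extern "C" */
#endif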
|
# ? Jan 28, 2021 23:47 |
|
yeah i woke up with a little bit of a hangover and dutifully decided to not do that
|
# ? Jan 29, 2021 00:22 |
|
e:nvm dumb question
feelix fucked around with this message at 04:19 on Feb 3, 2021 |
# ? Feb 3, 2021 04:15 |
|
I’m trying to do these SDL tutorials in Ubuntu and the programs won’t compile. When I try to compile this file it gives this error:
https://lazyfoo.net/tutorials/SDL/15_rotation_and_flipping/index.php
https://pastebin.com/HsGLyTEh

code:

Specifying -std=c++11 does not help either.
icantfindaname fucked around with this message at 02:39 on Feb 10, 2021 |
# ? Feb 10, 2021 02:36 |
|
icantfindaname posted: I’m trying to do these SDL tutorials in Ubuntu and the programs won’t compile. When I try to compile this file it gives this error

g++ -lSDL2 -lSDL2_image -I/usr/include/SDL2/ -o test 15_rotation_and_flipping.cpp

works
|
# ? Feb 10, 2021 02:48 |
|
gcc will automatically recognize C++ source and compile it as such (based on the file name extension), but it won't automatically link it with the C++ runtime library. g++ will do that.
|
# ? Feb 10, 2021 03:06 |
|
Now I’m getting this error. I have the packages for libsdl2-image and libsdl2-image-dev both installed, but it doesn’t seem to recognize them. There is indeed a prototype for IMG_Load() in /usr/include/SDL2/SDL_image.h
https://pastebin.com/eiEkCJ8E

code:
|
# ? Feb 10, 2021 04:44 |
|
This probably won't help, but try changing it to g++ $(pkg-config --cflags --libs sdl2 SDL2_image) 15_rotation_and_flipping.cpp
|
# ? Feb 10, 2021 05:06 |
|
Okay, I got it working by changing the order around, thanks
icantfindaname fucked around with this message at 05:38 on Feb 10, 2021 |
# ? Feb 10, 2021 05:35 |
|
Do you guys have any favorite libraries for sparse matrices? I need to multiply and invert a large (thousands x thousands) matrix and I'm currently using Armadillo. Annoyingly, the slowest part right now is that I need to knock out rows and columns after assembling the matrix. It's a lot of memory to move around so I'm not surprised, and the correct way to solve this is to write my code to avoid having to do it, but I'm lazy and would like to avoid that if possible.
|
# ? Feb 11, 2021 22:24 |
|
Eigen has a SparseMatrix class, which I haven't used, but I have used Eigen and it's generally solid. That said, my understanding is that it and Armadillo are pretty even outside of the small-matrix case (where Eigen is much faster), since they both call OpenBLAS or something like that for the actual algorithms on dense matrices. In Eigen's case you can compile it against a couple of different libraries; I don't know about Armadillo. Not sure what Eigen does for sparse stuff exactly, though. As far as I understand, you generally have to write some careful code by hand to make data manipulation in sparse matrices efficient.

You also really, really don't want to invert sparse matrices, ever. It is excruciatingly slow, as there's absolutely no guarantee the inverse will be sparse, so you're basically doing dense matrix math on a hundred-million-element matrix in sparse-matrix storage. Try very hard to solve your problem with a Cholesky or LU decomposition or something.

Xerophyte fucked around with this message at 00:28 on Feb 12, 2021
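Not from the thread, but a rough sketch of what the decomposition route looks like with Eigen's sparse solvers (the triplet assembly and the symmetric-positive-definite assumption are mine):

code:
#include <Eigen/Sparse>
#include <vector>

// Solve A x = b via a sparse Cholesky factorization instead of forming A^-1.
// Assumes A is symmetric positive definite; Eigen::SparseLU works otherwise.
Eigen::VectorXd solve_sparse(int n,
                             const std::vector<Eigen::Triplet<double>>& entries,
                             const Eigen::VectorXd& b) {
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(entries.begin(), entries.end());

    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);        // factor once
    return solver.solve(b);   // then solve; no dense inverse is ever materialized
}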
# ? Feb 12, 2021 00:25 |