|
Jabor posted: It sounds like what you're wanting to do is convert the string to a number, and then output that number as hexadecimal, right?

I'm given a string that has a hex number inside it, so I guess what I really need to do is convert it to a number and then to decimal. An example string I'm given is "4D7E". e: that doesn't seem to work. I tried atoi (and atof) on customer.id.c_str(), but it doesn't seem to convert correctly, probably because it expects 0-9 and no letters.

enthe0s fucked around with this message at 00:58 on Jun 20, 2012 |
# ? Jun 20, 2012 00:09 |
|
GrumpyDoctor posted: edit 2: I realized that I need a more sophisticated structure and found Boost.MultiIndex, so I'm going with that.

Boost.MultiIndex is pretty much always the answer when it comes to any kind of associative container. It even does map better than std::map, in my opinion, and if you need something more highly constrained, building it around multi_index_container is better than making it from scratch.
|
# ? Jun 20, 2012 01:18 |
|
enthe0s posted: I'm given a string that has a hex number inside it, so I guess what I really need to do is convert it to a number and then to decimal. An example string I'm given is "4D7E".

strtol allows you to specify a base for the conversion. You can also do it with streams - reading from a stream with std::hex set into an integer will treat the input as a hexadecimal number and convert it appropriately.
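A minimal sketch of both approaches Jabor describes (function names here are made up for illustration):

```cpp
#include <cassert>
#include <cstdlib>   // std::strtol
#include <sstream>   // std::istringstream, std::hex
#include <string>

// Convert a hex string like "4D7E" to an integer using strtol's base argument.
long hex_with_strtol(const std::string& s) {
    return std::strtol(s.c_str(), nullptr, 16);  // base 16
}

// Same conversion via a stream with std::hex set on it.
long hex_with_stream(const std::string& s) {
    long value = 0;
    std::istringstream in(s);
    in >> std::hex >> value;  // digits are read as hexadecimal
    return value;
}
```

Either one turns "4D7E" into the integer 19838, which can then be printed in decimal with ordinary output.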
|
# ? Jun 20, 2012 01:25 |
|
Jabor posted: strtol allows you to specify a base for the conversion.

Exactly what I needed! Thanks!
|
# ? Jun 20, 2012 04:21 |
|
Ok, another question. I need reproducible real-number math across compilers and operating systems. I tried for a while to get the built-in floating-point calculations to always give the same result on windows/MSVC and linux/g++, but that seems to be a Pandora's box of black-magic tricks, and even if I could get it to work I don't really want to rely on such an unstable setup. Can anyone recommend a library for this?

It's for a game, so I don't care much about accuracy, and I don't care if it's fixed-point or floating-point. If it has a C++ "drop-in float-replacement class" that's a bonus, but I can always make one myself if it's a C library or whatever. The only things I need are for it to be reasonably fast, support a few functions (sin, cos, atan, sqrt, conversion to & from float/int), and the aforementioned determinism.

I really don't want to have to write this stuff myself, but so far I haven't been able to find anything suitable. The fixed-point libraries I can find all seem to lack basic math functions. The only thing that did seem promising was MPFR (and its C++ wrapper), which is good and totally works, except it's variable-precision: I think it's really designed for accuracy and for working with thousands-of-bits floating-point representations. When I just want a standard 4-byte float, as far as I can tell, the overhead of variable precision means MPFR operations are several hundred times slower than the built-in float operations... I don't need blinding speed but that's not good enough! (Maybe I'm doing something wrong...? I mean, it seems really slow, even if all the functions are having to do a bunch of unnecessary work just in case the inputs are 1000-bit floats instead of 32.)

Any advice? Thanks!

seiken fucked around with this message at 22:24 on Jun 21, 2012 |
# ? Jun 21, 2012 22:10 |
|
A lot of the variability in floating-point results happens because of differences in hardware. So chances are any truly portable implementation is entirely a software implementation, which means some pretty poor performance. On the other hand, if you can constrain what hardware you're using (for instance, if you can assume x86 w/ SSE), you might have an easier time of it (since now it's just about getting it configured the same way and stopping the compiler from breaking it with optimizations). If you try searching for "reproducible floating-point" you might find something helpful.
|
# ? Jun 21, 2012 22:43 |
|
Quick typedef syntax question: the alias to a function pointer is defined in-line, as opposed to after the fact like with other data types, correct? So proper syntax in various cases looks like: code:
|
# ? Jun 21, 2012 22:47 |
Paolomania posted: Quick typedef syntax question: the alias to a function pointer is defined in-line, as opposed to after the fact like with other data types, correct? So proper syntax in various cases looks like:

Yes. Typedef syntax is basically typedef <declaration-of-one-identifier>, i.e. "take a variable declaration of the type you want, and put typedef in front."
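A small sketch of that rule in practice (the names MyInt, IntArray, BinaryOp, and add are all invented for illustration):

```cpp
#include <cassert>

// "Take a variable declaration of the type you want, and put typedef
// in front." The identifier being declared becomes the alias name.
typedef int MyInt;                  // int x;           -> typedef int MyInt;
typedef int IntArray[4];            // int x[4];        -> typedef int IntArray[4];
typedef int (*BinaryOp)(int, int);  // int (*x)(int,int); -> typedef int (*BinaryOp)(int, int);

// A function matching the BinaryOp signature.
int add(int a, int b) { return a + b; }
```

So the function-pointer case isn't special; it only looks different because the identifier in a function-pointer declaration sits in the middle of the type rather than at the end.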
|
|
# ? Jun 21, 2012 22:51 |
|
Jabor posted: A lot of the variability in floating-point results happens because of differences in hardware. So chances are any truly portable implementation is entirely a software implementation, which means some pretty poor performance.

Yeah, I mean I can probably assume an x86 processor. But I tried for like a week without success to get reproducible results on just MSVC/windows and g++/linux, even on the same identical machine! I got very close, but eventually the behaviour always diverges. It's very unstable, and the web is full of conflicting information about what compiler settings will work, whether it's even possible, and so on. I just want to use something implemented with integer types and know that it's gonna work.

On speed: my game is really very, very simple. A software implementation should definitely be fast enough! The problem with the MPFR library is that when I replaced all the floats with 32-bit mpfr types, I went from being able to calculate and render tens of thousands of frames per second to less than 10 fps... Even if I only use mpfr types for the engine code, and not rendering (which doesn't need to be reproducible), it slows down noticeably in some of the more CPU-heavy bits. I believe this is because MPFR has a lot of overhead to allow for operations on types of arbitrary precision (including all the data being allocated on the heap internally...). And it's a C library, so no templates and nothing gets optimised away, even though I only ever use very low precision.

I'm sure something with a fixed-precision number type (especially fixed-point) would be more than fast enough. I just can't find any good implementations, which is very surprising to me.

seiken fucked around with this message at 00:00 on Jun 22, 2012 |
# ? Jun 21, 2012 23:06 |
|
seiken posted: Yeah, I mean I can probably assume x86 processor. But I tried for like a week without success to get reproducible results on just MSVC/windows and g++/linux even on the same identical machine! I got very close but eventually the behaviour always diverges. It's very unstable and the web is full of conflicting information about what compiler settings will work, whether it's even possible, and so on. I just want to use something implemented with integer types and know that it's gonna work.

I saw this a couple of months back. It doesn't help you much, but it gives you some insight into a lot of the crap with floating point:
|
# ? Jun 22, 2012 02:35 |
|
The basic principle is to turn on strict floats and disable all optimizations, because it's the optimizations that cause differing behaviour between compilers. Something like this might help, though.
|
# ? Jun 22, 2012 02:45 |
|
Yeah, the basic problem is that in many cases a+b != b+a for floating point numbers, so anytime your compiler changes the operation order, it may change the result.
Brain Candy fucked around with this message at 03:20 on Jun 22, 2012 |
# ? Jun 22, 2012 03:17 |
|
Brain Candy posted: Yeah, the basic problem is that in many cases a+b != b+a for floating point numbers,

When is that the case? Edit: Other than NaN.

shrughes fucked around with this message at 04:37 on Jun 22, 2012 |
# ? Jun 22, 2012 04:27 |
|
Brain Candy posted: Yeah, the basic problem is that in many cases a+b != b+a for floating point numbers, so anytime your compiler changes the operation order, it may change the result.

I thought floating point addition was commutative?
|
# ? Jun 22, 2012 04:28 |
|
There's a pretty big collection of resources on floating point determinism here. TL;DR: It's possible but hard.
|
# ? Jun 22, 2012 05:06 |
|
shrughes posted: When is that the case?

Never. My edit from associative to commutative was, ah... not good.

Brain Candy fucked around with this message at 05:46 on Jun 22, 2012 |
# ? Jun 22, 2012 05:28 |
|
Thanks for all the replies. I think I didn't make it clear enough that I've already screwed around a lot with FPU floating-point determinism and am just looking for something in software. It just doesn't seem sensible to me to rely on something so unguaranteed that could break for any number of reasons. In the end I think I'm just gonna implement my own fixed-point math because it's really simple and more than accurate enough.
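For reference, the core of such a type really is small. This is a minimal 16.16 fixed-point sketch, not seiken's actual implementation, and it omits the transcendental functions and overflow handling a real one would need:

```cpp
#include <cassert>
#include <cstdint>

// Minimal 16.16 fixed-point number: the value is stored as raw / 65536.
// Every operation is exact integer math, so results are bit-identical
// across compilers and platforms.
struct Fixed {
    int32_t raw;  // value * 2^16

    static Fixed fromInt(int v) { return Fixed{v << 16}; }
    static Fixed fromFloat(float f) {
        return Fixed{static_cast<int32_t>(f * 65536.0f)};
    }
    float toFloat() const { return raw / 65536.0f; }

    Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
    Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
    // Widen to 64 bits so the intermediate product doesn't overflow.
    Fixed operator*(Fixed o) const {
        return Fixed{static_cast<int32_t>(
            (static_cast<int64_t>(raw) * o.raw) >> 16)};
    }
    Fixed operator/(Fixed o) const {
        return Fixed{static_cast<int32_t>(
            (static_cast<int64_t>(raw) << 16) / o.raw)};
    }
};
```

sin/cos/atan/sqrt can then be built on top of this with lookup tables or CORDIC-style iteration, which stay deterministic because they too are pure integer math.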
seiken fucked around with this message at 12:46 on Jun 22, 2012 |
# ? Jun 22, 2012 12:43 |
|
There is one case in which floating-point addition is not commutative, which is that -0 + +0 != +0 + -0. Many compilers willfully ignore this because commutativity is too precious to lose (because of how ISAs generally work).
|
# ? Jun 22, 2012 19:01 |
|
I've got a C++ program that's hitting the 4GB process memory limit on our Sun system and crashing. This is a known issue that I can't really do much about (yet), but I want to at least log that it happened for later. (Runs are un-monitored, so we never see the "Abort (core dumped)" on the non-existent console.)

I was under the impression that running out of memory would cause a std::bad_alloc to be thrown, but instead the program seems to be raising SIGABRT, which means my logging code of course never gets reached. Would it be of any use for me to set up a handler for SIGABRT and try to log there, or is it completely too late to do anything by that point? For that matter, why is it raising SIGABRT instead of throwing the exception? (Is it trying to make the exception and failing because it ran out of memory?)

(edit) Just to make sure I was clear: I don't intend to try to recover, I just want to log what happened and let it continue to croak.

Ciaphas fucked around with this message at 19:53 on Jun 22, 2012 |
# ? Jun 22, 2012 19:44 |
|
rjmccall posted: There is one case in which floating-point addition is not commutative, which is that -0 + +0 != +0 + -0. Many compilers willfully ignore this because commutativity is too precious to lose (because of how ISAs generally work).

Compilers or CPUs? I tested that before writing my previous reply and got +0 in both cases. I didn't look at the assembly, but based on the way the code was written I doubt it was generated in such a way that it wasn't the CPU itself producing this behavior. Is the source for your statement something other than Wikipedia? That's my only source, which implies noncommutative addition in that case.
|
# ? Jun 22, 2012 20:07 |
|
Ciaphas posted: I've got a C++ program that's hitting the 4GB process memory limit on our Sun system and crashing. This is a known issue that I can't really do much about (yet), but I want to at least log that it happened for later. (Runs are un-monitored, so we never see the "Abort (core dumped)" on the non-existent console.)

Uncaught exceptions eventually result in SIGABRT, so the obvious explanation is that it is getting thrown.
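If that's what is happening, one hedged way to get a log line out is a terminate handler; this is a sketch (names invented), not Ciaphas's setup:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <exception>

// An uncaught exception ends up in std::terminate(), which raises SIGABRT.
// A terminate handler runs first, so it can write a log line even though
// no catch block in main is ever reached. Keep the handler allocation-free:
// if the process is out of memory, any 'new' in here will just fail again.
void logAndDie() {
    std::fputs("terminating: likely an uncaught exception (bad_alloc?)\n",
               stderr);
    std::abort();  // preserve the original abort / core-dump behaviour
}

// Call once, early in main(); returns the previous handler.
std::terminate_handler installCrashLogger() {
    return std::set_terminate(logAndDie);
}
```

This doesn't explain why the catch(...) in main isn't firing, but it does guarantee something gets logged on the way down.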
|
# ? Jun 22, 2012 20:49 |
|
If I have the following setup: code:
|
# ? Jun 22, 2012 21:00 |
|
Ciaphas posted: I've got a C++ program that's hitting the 4GB process memory limit on our Sun system and crashing. This is a known issue that I can't really do much about (yet), but I want to at least log that it happened for later. (Runs are un-monitored, so we never see the "Abort (core dumped)" on the non-existent console.)

I would start by looking for a double-throw (i.e. an exception that might be thrown while unwinding the stack because of the std::bad_alloc). I believe that there's some requirement on C++ implementations to prevent the "runs out of memory while dealing with a std::bad_alloc" situation, but that's literally just something I kind of remember reading about somewhere.
|
# ? Jun 22, 2012 21:01 |
|
shrughes posted: Compilers or CPUs? I tested that before writing my previous reply and got +0 in both cases. I didn't look at the assembly, but based on the way the code was written I doubt it was generated in such a way that it wasn't the CPU itself producing this behavior. Is the source for your statement something other than Wikipedia? That's my only source, which implies noncommutative addition in that case.

Actually, I was wrong; I went and checked IEEE, and the relevant wording is:

IEEE 754-2008 posted: When the sum of two operands with opposite signs (or the difference of two operands with like signs) is exactly zero, the sign of that sum (or difference) shall be +0 in all rounding-direction attributes except roundTowardNegative; under that attribute, the sign of an exact zero sum (or difference) shall be −0. However, x + x = x − (−x) retains the same sign as x even when x is zero.
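That wording is easy to check from C++; this sketch assumes IEEE-754 hardware under the default round-to-nearest mode:

```cpp
#include <cassert>
#include <cmath>  // std::signbit

// volatile keeps the compiler from constant-folding the sums, so what we
// observe is the hardware's behaviour at the default rounding mode.
volatile double negZero = -0.0;
volatile double posZero = +0.0;

// True iff a + b produces negative zero (equal to zero, sign bit set).
bool sumIsNegativeZero(double a, double b) {
    double s = a + b;
    return s == 0.0 && std::signbit(s);
}
```

Under round-to-nearest, -0 + +0 and +0 + -0 both give +0 (so addition really is commutative there, matching shrughes's test), while -0 + -0 keeps the negative sign.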
|
# ? Jun 22, 2012 21:01 |
|
seiken posted: Yeah, I mean I can probably assume x86 processor. But I tried for like a week without success to get reproducible results on just MSVC/windows and g++/linux even on the same identical machine! I got very close but eventually the behaviour always diverges. It's very unstable and the web is full of conflicting information about what compiler settings will work, whether it's even possible, and so on. I just want to use something implemented with integer types and know that it's gonna work.

You could try CORE (which I believe actually typedefs double if you let it) or LEDA, which is notable for being apparently the only library of its type that isn't ultimately built on top of GMP.
|
# ? Jun 22, 2012 21:04 |
|
Plorkyeran posted: Uncaught exceptions eventually result in SIGABRT, so the obvious explanation is that it is getting thrown.

I should have mentioned I put all sorts of catches in main, including catch(...), and it never seems to make it that far.

GrumpyDoctor posted: I would start by looking for a double-throw (i.e. an exception that might be thrown while unwinding the stack because of the std::bad_alloc). I believe that there's some requirement on C++ implementations to prevent the "runs out of memory while dealing with a std::bad_alloc" situation but that's literally just something I kind of remember reading about somewhere.

Thanks, I'll try this. Does any code execute besides destructors for stack objects that I need to look out for?
|
# ? Jun 22, 2012 21:19 |
|
Ciaphas posted: I should have mentioned I put all sorts of catches in main, including catch(...), and it never seems to make it that far.

If you can run the program in a debugger, just set it to break on exception throws and allocate a huge chunk of memory up front to force an allocation failure shortly thereafter.
|
# ? Jun 22, 2012 21:38 |
IratelyBlank posted: If I have the following setup:

Your addArrays function is taking a pointer type, not an array type. Only the pointer is getting passed, not the pointed-to data.
|
|
# ? Jun 22, 2012 21:40 |
|
I've rigged up a Boost.MultiIndex container and I need it to have uniqueness on a composite key. I don't need to actually retrieve records via this composite key, so is it possible to set up the container this way? (I figure there's a good chance that leaving out that particular retrieval capability doesn't actually gain me anything.)
|
# ? Jun 23, 2012 01:00 |
|
GrumpyDoctor posted: You could try CORE (which I believe actually typedefs double if you let it) or LEDA which is notable for being apparently the only library of its type that isn't ultimately built on top of GMP.

Hi, really appreciate your input (and those links are interesting even if I don't use them), but I managed to implement fixed-point from scratch already. It works great and seems to be almost as fast as using floats, even with my very naive first-attempt implementations of the transcendental functions.

seiken fucked around with this message at 18:17 on Jun 23, 2012 |
# ? Jun 23, 2012 01:42 |
|
I'm glad you found a working solution. I think the fact that you only care about reproducibility actually sets you apart from most of the use cases existing libraries are designed for, so it's not terribly surprising that a) you couldn't easily find something off-the-shelf, and b) you managed to roll your own solution in not much time. (I've spent two years dicking around with exact numeric representations, so I kind of jump at the chance to talk about this kind of thing.)
raminasi fucked around with this message at 05:37 on Jun 23, 2012 |
# ? Jun 23, 2012 05:35 |
|
Makes sense. Definitely most of the things I came across that weren't someone else's 15-minute fixed-point implementation were aimed squarely at science/academic stuff (including those two you linked). I have sin(pi/2)=1.004 and I'm perfectly happy with that! On the other hand, I believe fast, inaccurate, but reproducible real-number math should be a fairly common use-case for games... RTS networking, game replays, etc. can be done easily just by recording user inputs with that. Anyway, no worries, I shouldn't have been so hesitant to do it myself, because it was fairly easy and now I know way more about bits.
seiken fucked around with this message at 16:11 on Jun 24, 2012 |
# ? Jun 23, 2012 18:43 |
|
seiken posted: Makes sense. Definitely most of the things I came across that weren't someone else's 15-minute fixed-point implementation were aimed squarely at science/academic stuff (including those two you linked). I have sin(pi/2)=1.004 and I'm perfectly happy with that! On the other hand I believe fast inaccurate but reproducible real-number math should be a fairly common use-case for games... RTS networking, game replays, etc can be done easily just by recording user inputs with that. Anyway no worries, I shouldn't have been so hesitant to do it myself because it was fairly easy and now I know way more about bits.

Speaking of which, I didn't see it mentioned, but I assume you've seen Bruce Dawson's posts on this? In eight parts no less! http://www.altdevblogaday.com/2012/04/20/exceptional-floating-point/
|
# ? Jun 24, 2012 05:49 |
|
I haven't programmed much C but I've used a shitload of Java. I have to learn to do C on a Linux machine, and I haven't used that either. Can someone explain a fast way to set up and use this on Ubuntu?
|
# ? Jun 24, 2012 18:25 |
|
Boz0r posted: I haven't programmed much C but I've used a shitload of Java. I have to learn to do C on a Linux machine, and I haven't used that either. Can someone explain a fast way to set up and use this on Ubuntu?

sudo apt-get install build-essential (it's build-essential, singular)
|
# ? Jun 24, 2012 18:37 |
|
Thanks, that seemed to work. How do I use it, now?
|
# ? Jun 24, 2012 18:50 |
|
Easiest way is to write your code using whatever text editor you like (the default is probably half-decent), save it, open a terminal, and type something like:

gcc path/to/example.c

It's as easy as that! Then to run it, type:

./a.out

which is what it'll name the output executable by default.
|
# ? Jun 24, 2012 18:53 |
|
Note that as soon as you start wanting to use libraries, you want to use pkg-config rather than manually finding the flags. You also probably want to use -Wall and -Werror. It's going to look like: code:
|
# ? Jun 24, 2012 19:04 |
|
Cool, and gedit has built-in highlighting. Thanks
|
# ? Jun 24, 2012 19:12 |
|
I find myself spinning a lot on calls that return a bool saying whether to continue running. I figured it was about time I did something about them, because the poll overhead is now sufficiently obnoxious. I could use a callback to signal the end of the operation, but I was hoping not to have to write and implement callbacks every time. I can't think of a way to do it better with, say, Boost. I am using its threading stuff right now, so I do use condition variables in places, and I use the predicate function in there sometimes. Does anybody know anything clever that I might try here? I can only think of stuff using Tasklets or similar, but it wasn't clear to me that they put anything like that in the library.
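The condition variables already in use cover this case directly: instead of polling the bool, have the completing side flip it under a mutex and notify. A sketch with invented names, written against the standard thread API (Boost.Thread's is essentially identical):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Replaces a spin-on-isDone() loop: the waiter sleeps until the worker
// signals completion. The predicate overload of wait() handles spurious
// wakeups, so no manual re-check loop is needed.
class Operation {
public:
    // Called by the worker when the operation finishes.
    void markDone() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }  // release the lock before notifying
        cv_.notify_all();
    }

    // Called by the waiter; blocks without burning CPU.
    void waitUntilDone() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return done_; });
    }

    bool done() const {
        std::lock_guard<std::mutex> lock(m_);
        return done_;
    }

private:
    mutable std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

This keeps the "check a bool" shape of the existing code - the bool just becomes the condition variable's predicate, so the check happens once per wakeup instead of once per poll interval.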
|
# ? Jun 24, 2012 21:26 |