|
I can only seem to google up advice on how to make cout round a double. I actually want to stop it from rounding my doubles. Whenever I try to: code:
|
# ? Dec 29, 2009 23:56 |
|
http://codepad.org/VKjKmHNl
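The paste presumably showed something along these lines: cout isn't rounding to integers by design, it just defaults to six significant digits, and std::setprecision raises that. A sketch (not the actual paste contents):

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Format a double at a given precision, mirroring what
// std::cout << std::setprecision(p) << d would print.
std::string show(double d, int precision) {
    std::ostringstream os;
    os << std::setprecision(precision) << d;
    return os.str();
}
```

show(3.14159265358979, 6) gives "3.14159" (the six-significant-digit default), while precision 15 gives the full "3.14159265358979".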
|
# ? Dec 30, 2009 00:01 |
|
Subway Ninja posted:I can only seem to google up advice on how to make cout round a double. I actually want to stop it from rounding my doubles. Whenever I try to: code:
|
# ? Dec 30, 2009 00:03 |
|
Thanks for the quick reply. I'm sure there's a valid reason that I've yet to learn for this behavior, but as a newbie this seems counterintuitive.
|
# ? Dec 30, 2009 00:04 |
|
Subway Ninja posted:Thanks for the quick reply. I'm sure there's a valid reason that I've yet to learn for this behavior, but as a newbie this seems counterintuitive. What exactly was going wrong with it? Was the precision so deep it was rounding up on its own? I may be wrong (AD, back me up) but I think it's implementation-specific how deep the ostream precision is by default for floats and doubles, no? On mine (GCC 4.3.2 on Debian 5.0.3 with nothing fancy) it rounds at the fourth place after the decimal point.
|
# ? Dec 30, 2009 00:11 |
|
Not sure. I'm using Visual C++ 2008 Express. Every float/double I printed to the console was coming out rounded to the nearest whole integer. I even stepped through the program and it saved the correct values to my variables. I didn't change any settings within the studio either. I've only been learning C++ for about two weeks, so I was at a loss as to what I was doing wrong. I'll have to see if there's a setting for this somewhere.
|
# ? Dec 30, 2009 00:17 |
|
I'm not sure of the terminology here, but (for example) the templated function boost::lexical_cast deals with two typenames, Source and Target, but infers the Source type from context so you only have to specify Target in the call. code:
code:
|
# ? Dec 30, 2009 12:20 |
|
Just change template<typename SourceType, typename TargetType> to template<typename TargetType, typename SourceType>.
|
# ? Dec 30, 2009 14:18 |
|
Google template argument deduction. In both your function and boost::lexical_cast, the compiler has no way to determine the TargetType, which is why that type has to be supplied as a template argument. However, the reason Boost's function works and yours doesn't is the order of your template arguments. For example, say you tried to call your function like this: float f = fuck_me_cast<float>(s); This says that float is the type of SourceType and the compiler should deduce the type of TargetType. This is most assuredly not what you mean. However, if you swap the order of your template arguments you get: code:
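A hedged reconstruction of the fix (the body is a guess modeled on the usual stringstream round-trip; the parameter order is the point):

```cpp
#include <sstream>
#include <string>

// TargetType first: the caller names it explicitly, and SourceType is
// deduced from the argument, just like boost::lexical_cast.
template <typename TargetType, typename SourceType>
TargetType fuck_me_cast(const SourceType& s) {
    std::stringstream ss;
    TargetType t = TargetType();
    ss << s;
    ss >> t;
    return t;
}
```

Now float f = fuck_me_cast<float>(std::string("1.5")); compiles as intended, with SourceType deduced as std::string.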
|
# ? Dec 30, 2009 14:19 |
|
Lexical Unit posted:Google template argument deduction.
|
# ? Dec 30, 2009 14:27 |
|
I'm not really an expert on the differences between compilers, as for one I've never even used MSVC++. But yes, template argument deduction is part of the C++ standard. It should work as expected, barring a compiler bug/issue.
|
# ? Dec 30, 2009 14:44 |
|
I know this is an old problem. For some reason I'm having trouble finding the relevant libraries. I'm comparing '\0'-terminated strings of unknown length against char arrays of known length, and for performance purposes the comparison function should make the minimal number of comparisons. Specifically, the function shouldn't call strlen on the null-terminated string in order to fall back to an easier method. This problem is trickier than it looks. Neither strncmp nor memcmp solve it directly. Someone must have written analogues of the string.h functions for all of these possibilities.
|
# ? Dec 31, 2009 09:27 |
|
functional posted:Neither strncmp nor memcmp solve it directly. Why not?
|
# ? Dec 31, 2009 09:33 |
|
functional posted:I know this is an old problem. For some reason I'm having trouble finding the relevant libraries. I'm comparing '\0'-terminated strings of unknown length against char arrays of known length, and for performance purposes the comparison function should make the minimal number of comparisons. Specifically, the function shouldn't call strlen on the null-terminated string in order to fall back to an easier method. This problem is trickier than it looks. Neither strncmp nor memcmp solve it directly. I assume the problem is that strncmp says that the strings are equal if the fixed-length array is a prefix of the longer string? If that's so, the obvious answer would be to just arrange to nul-terminate the fixed-length array, and use strcmp.
|
# ? Dec 31, 2009 09:43 |
|
ShoulderDaemon posted:I assume the problem is that strncmp says that the strings are equal if the fixed-length array is a prefix of the longer string? If that's so, the obvious answer would be to just arrange to nul-terminate the fixed-length array, and use strcmp. I'd hope that if that were the only issue (and changing the char array was forbidden for some reason), it would be obvious that it's trivial to just roll your own strncmp-ish function. I mean, it's not like glibc's implementation of strncmp is all that complex, especially when you eliminate the manual loop-unrolling. EDIT: and if you want to compare word-sized blocks of chars at a time, that's easy too. (But basically intractable if your strings are differently-aligned.) Avenging Dentist fucked around with this message at 10:02 on Dec 31, 2009 |
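A minimal version of such a roll-your-own function (a sketch, assuming buf has no nonterminal '\0'): walk both together, stopping at buf's known length or s's terminator, whichever comes first.

```cpp
#include <cstddef>

// Compare a nul-terminated string s against a char array buf of known
// length n, without a prior strlen. Returns <0, 0, or >0 like strcmp.
// Assumes buf contains no '\0' before index n.
int cmp_cstr_array(const char* s, const char* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        unsigned char a = static_cast<unsigned char>(s[i]);
        unsigned char b = static_cast<unsigned char>(buf[i]);
        if (a != b) return a < b ? -1 : 1;  // also catches s ending early
    }
    // buf is exhausted; equal only if s ends exactly here.
    return s[n] == '\0' ? 0 : 1;
}
```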
# ? Dec 31, 2009 09:50 |
|
Avenging Dentist posted:I'd hope that if that were the only issue (and changing the char array was forbidden for some reason) ... The problem is more finding a library that's solved all of these issues already. Handling the difference between nul-terminated strings and char arrays is an old and well-known problem. Obviously anyone could write these functions themselves in short order, but that misses two points: rewriting a library is not the best use of your time, and you'll miss a lot of optimizations that the library writers didn't. There's no way I would've included the loop-unrolling found in glibc's strncmp.
|
# ? Dec 31, 2009 20:36 |
|
functional posted:There's no way I would've included the loop-unrolling found in glibc's strncmp. And why should you? GCC's optimizer does that for you (as do most others). You are, like most people who worry about stuff like this, trying to outsmart hundreds of people who have made efficiency their job for a good decade. Also you still haven't explained why strncmp is unsuitable.
|
# ? Dec 31, 2009 20:41 |
|
Avenging Dentist posted:Also you still haven't explained why strncmp is unsuitable. I'm guessing it's because the length argument to strncmp has to be the length of the fixed-sized array (since it's not null-terminated). But then that won't work if the variable-length string is longer (which you don't know without scanning it beforehand). I'd probably write a naive loop and see how that performs before going any further. Compiler optimizations may make this problem moot. Honestly, I'm a bit skeptical as to how important speeding this up is (in the original context). If it's really a giant bottleneck, surely it'd be worth the initial cost to copy the fixed-sized strings and null-terminate them (or maybe do so lazily).
|
# ? Dec 31, 2009 22:40 |
|
It only wouldn't work if the array-based string contained '\0' in a nonterminal position. Just write your own string comparison routine. Like AD says, it's not hard. If that's too much trouble, take the length of the null-terminated string and stop whining. You probably don't need the extra speed anyway.
|
# ? Dec 31, 2009 23:54 |
|
Lonely Wolf posted:You probably don't need the extra speed anyway. It's worth noting that if you take the strlen of a C string before doing some other operation on that same string, then as long as the string fits into cache, the strlen is highly likely to be literally free: very few programs nowadays are actually CPU-limited rather than memory-bandwidth-limited. You can easily perform thousands of operations in the time it takes to fetch a single cacheline; hell, if the strlen hits the predictive fetcher in your CPU in a more favorable manner than whatever algorithm you run after it, you may even save time as a result of parallel and out-of-order execution. Or, more simply: optimizing without profiling is really, really dumb.
|
# ? Jan 1, 2010 00:06 |
|
floWenoL posted:But then that won't work if the variable-length string is longer (which you don't know without scanning it beforehand). That all depends on what specifically he means by "comparing" the strings, which he hasn't qualified.
|
# ? Jan 1, 2010 00:06 |
|
Avenging Dentist posted:That all depends on what specifically he means by "comparing" the strings, which he hasn't qualified. I assume since he mentioned strncmp and memcmp he means lexicographically. However, unless his strings include embedded nulls, strncmp plus some extra logic if strncmp returns 0 would suffice, I think.
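A sketch of that extra logic (hypothetical, and assuming no embedded nulls, per the caveat):

```cpp
#include <cstring>

// Lexicographic compare of nul-terminated s against a char array buf of
// known length n: let strncmp handle the first n bytes, then break the
// tie on whether s continues past them. Assumes no embedded nulls in buf.
int cmp_with_strncmp(const char* s, const char* buf, std::size_t n) {
    int r = std::strncmp(s, buf, n);
    if (r != 0) return r;
    // If s terminated within the first n bytes, strncmp already compared
    // through its '\0'; don't read past it.
    if (std::memchr(s, '\0', n) != NULL) return 0;
    return s[n] == '\0' ? 0 : 1;  // s continues past buf => s is greater
}
```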
|
# ? Jan 1, 2010 00:49 |
|
Avenging Dentist posted:I'd hope that if that were the only issue (and changing the char array was forbidden for some reason), it would be obvious that it's trivial to just roll your own strncmp-ish function. I mean, it's not like glibc's implementation of strncmp is all that complex, especially when you eliminate the manual loop-unrolling. Can you elaborate more on the loop unrolling and use the function you linked to in your explanation?
|
# ? Jan 1, 2010 01:44 |
|
RussianManiac posted:Can you elaborate more on the loop unrolling and use the function you linked to in your explanation? Sure.
|
# ? Jan 1, 2010 02:53 |
|
Avenging Dentist posted:Sure. Yea, I could do that, but I was really looking for your expert words of wisdom on this subject instead
|
# ? Jan 1, 2010 02:59 |
|
RussianManiac posted:Can you elaborate more on the loop unrolling and use the function you linked to in your explanation? code:
Blotto Skorzany fucked around with this message at 23:05 on Jan 1, 2010 |
# ? Jan 1, 2010 23:02 |
|
Even with a lovely old compiler, how much of a win is unrolling if there's still a conditional branch per character?
|
# ? Jan 1, 2010 23:54 |
|
Fecotourist posted:Even with a lovely old compiler, how much of a win is unrolling if there's still a conditional branch per character? It's still a net reduction in comparisons (you drop most of the loop-condition tests). Though in this case I question how much of a win it is, since you're almost always continuing in the loop, so the only time you'll blow your pipeline is on the final trip through.
|
# ? Jan 2, 2010 00:03 |
|
Oh ok, I get it. The point is to execute more in the body of a new optimized loop so that the number of comparisons and jumps for the loop condition is minimized as compared to a simple unoptimized loop?
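As a toy illustration of that trade (a sketch, not the snippet from earlier in the thread): the unrolled version pays the loop-condition test once per four characters instead of once per character, at the cost of a cleanup loop for the tail.

```cpp
#include <cstddef>

// Count non-nul bytes in the first n bytes of s, the naive way:
// one bounds check per character.
std::size_t count_nonzero_simple(const char* s, std::size_t n) {
    std::size_t c = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (s[i] != '\0') ++c;
    return c;
}

// The same, 4-way unrolled: one bounds check per four characters,
// plus a short tail loop for the leftovers.
std::size_t count_nonzero_unrolled(const char* s, std::size_t n) {
    std::size_t c = 0, i = 0;
    for (; i + 4 <= n; i += 4) {
        c += (s[i]     != '\0');
        c += (s[i + 1] != '\0');
        c += (s[i + 2] != '\0');
        c += (s[i + 3] != '\0');
    }
    for (; i < n; ++i)  // leftover tail (n not a multiple of 4)
        c += (s[i] != '\0');
    return c;
}
```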
|
# ? Jan 2, 2010 00:13 |
|
More a general programming question than a specific C++ question. I need to store a product key in a program. I realize that this is an incredibly stupid idea in general, and isn't really my choice. For at least some security, I want to store it so that at least under casual inspection people don't realize that there is something interesting stored in the binary, i.e. strings won't spit it out (or something else equally interesting, like the key used to encrypt the actual product key), and under a hex editor it won't stick out (the product key has many English words in it, including the company that makes the product and the owner of the license key). I can think of a lot of ways to do this, but I was wondering if there were any well-known techniques. I guess what I'm really scared of is that I come up with something "clever" involving lots of bitshifts and such that looks kosher but ends up being optimized away in the future by an even more clever compiler.
|
# ? Jan 4, 2010 06:46 |
|
Xoring every byte in the string with 0x80 is good enough to defeat casual inspection.
|
# ? Jan 4, 2010 06:55 |
|
rjmccall posted:Xoring every byte in the string with 0x80 is good enough to defeat casual inspection. why 80?
|
# ? Jan 4, 2010 07:30 |
|
RussianManiac posted:why 80? Setting the high order bit makes it not show up when you run strings against the program.
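A sketch of the scheme (symmetric, since x ^ 0x80 ^ 0x80 == x):

```cpp
#include <string>

// Flip the high bit of every byte. Applied once, this takes the text
// out of the printable-ASCII range (so strings(1) skips it); applied
// again, it restores the original.
std::string xor_high_bit(std::string s) {
    for (std::string::size_type i = 0; i < s.size(); ++i)
        s[i] = static_cast<char>(static_cast<unsigned char>(s[i]) ^ 0x80);
    return s;
}
```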
|
# ? Jan 4, 2010 07:50 |
|
I thought what rjmccall suggested sounded like something easily optimized away, but I tried the following in the Visual C++ 9 compiler with optimization on: code:
I guess it's time to learn the basics of x86 and see what it really did.
|
# ? Jan 4, 2010 09:21 |
|
ShoulderDaemon posted:Setting the high order bit makes it not show up when you run strings against the program. What exactly do you mean by "run strings against the program?" Do you mean that the ascii characters look like extended ascii instead of regular stuff because its higher than 127?
|
# ? Jan 4, 2010 10:50 |
|
RussianManiac posted:What exactly do you mean by "run strings against the program?" code:
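Presumably a demonstration of the strings(1) tool (GNU binutils); a hypothetical demo, not ShoulderDaemon's actual snippet:

```shell
# strings(1) reports runs of 4 or more printable bytes found in a file.
# High-bit bytes (e.g. anything xored with 0x80) break those runs, so an
# obfuscated key never shows up in the output.
printf 'hello world' > plain.bin
printf 'he\200ll\200o w\200or\200ld' > xored.bin   # \200 is octal for 0x80
strings plain.bin    # prints: hello world
strings xored.bin    # prints nothing: no printable run is long enough
```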
|
# ? Jan 4, 2010 11:03 |
|
Chuu posted:I thought what rjmccall suggested sounded like something easily optimized away, but I tried the following in the Visual C++ 9 compiler with optimization on: pre:
int main(int argc, char** argv) {
    char* charArray = static_cast<char*>(malloc(sizeof(char)*6));
01281000  6A 06              push        6
01281002  FF 15 C4 20 28 01  call        dword ptr [__imp__malloc (12820C4h)]
    charArray[0] = 'h';
    charArray[1] = 'e';
    charArray[2] = 'l';
    charArray[3] = 'l';
    charArray[4] = 'o';
    charArray[5] = '\0';
    doSomething(charArray);
01281008  8B 0D 44 20 28 01  mov         ecx,dword ptr [__imp_std::endl (1282044h)]
0128100E  83 C4 04           add         esp,4
01281011  51                 push        ecx
01281012  C7 00 68 65 6C 6C  mov         dword ptr [eax],6C6C6568h
01281018  66 C7 40 04 6F 00  mov         word ptr [eax+4],6Fh
0128101E  8B 15 5C 20 28 01  mov         edx,dword ptr [__imp_std::cout (128205Ch)]
01281024  50                 push        eax
01281025  52                 push        edx
01281026  E8 A5 00 00 00     call        std::operator<<<std::char_traits<char> > (12810D0h)
0128102B  83 C4 08           add         esp,8
0128102E  8B C8              mov         ecx,eax
01281030  FF 15 4C 20 28 01  call        dword ptr [__imp_std::basic_ostream<char,std::char_traits<char> >::operator<< (128204Ch)]
    return EXIT_SUCCESS;
01281036  33 C0              xor         eax,eax
}
01281038  C3                 ret
This is what I got out of VS2010 on Release. The malloc return was placed in EAX, doSomething was inlined, and "hello\0" was encoded into two instructions (see the bolded parts). I got this output by showing disassembly during debugging, and then turning on "Code bytes".
|
# ? Jan 4, 2010 17:01 |
|
I really meant something like this: code:
(†) Actually, the Cray compiler is insane, so who knows. EXTREMELY LATE EDIT: I had accidentally typoed the last ^ as a & rjmccall fucked around with this message at 01:15 on Jan 5, 2010 |
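Presumably along these lines (a reconstruction, with the final ^ as corrected in the edit; the array contents here are made up for illustration):

```cpp
#include <cstddef>
#include <string>

// Store the key with every byte pre-xored with 0x80. The compiler folds
// each 'k' ^ 0x80 into a plain constant, so the plaintext bytes never
// appear in the binary; xoring again at runtime recovers them.
static const unsigned char obfuscated[] = {
    'k' ^ 0x80, 'e' ^ 0x80, 'y' ^ 0x80
};

std::string decode_key() {
    std::string out;
    for (std::size_t i = 0; i < sizeof obfuscated; ++i)
        out += static_cast<char>(obfuscated[i] ^ 0x80);  // ^, not &
    return out;
}
```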
# ? Jan 4, 2010 19:31 |
|
Chuu posted:I thought what rjmccall suggested sounded like something easily optimized away, but I tried the following in the Visual C++ 9 compiler with optimization on: Turning a dynamic memory allocation into a static string is definitely beyond the scope of compiler optimizations. You might have better luck if you allocate charArray on the stack, e.g.: code:
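Presumably something like this, with the buffer as a local array so the optimizer sees its whole lifetime:

```cpp
#include <string>

// Build the string in a stack buffer instead of malloc'ing it; the byte
// stores are fully visible to the optimizer and can be folded into
// immediate constants, as in the disassembly above.
std::string buildHello() {
    char charArray[6];
    charArray[0] = 'h'; charArray[1] = 'e'; charArray[2] = 'l';
    charArray[3] = 'l'; charArray[4] = 'o'; charArray[5] = '\0';
    return std::string(charArray);
}
```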
|
# ? Jan 5, 2010 00:57 |
|
Thanks for the replies. I guess besides the comp-sci 101 examples (tail recursion, loop unrolling, inlining, copy prevention) I don't have a good sense of what's easily optimized and what isn't. Is there some good reference out there that explains modern compiler optimizing techniques?
|
# ? Jan 5, 2010 02:30 |