|
Dylan16807 posted:Not all floating point calculations. Only ones that involve fractional numbers. Integer math works just as well as it always does. code:
toiletbrush fucked around with this message at 15:55 on Sep 14, 2016 |
# ? Sep 14, 2016 15:52 |
|
Dylan16807 posted:When has moving to another architecture broken floating point code? Almost everything is IEEE standard, and good luck porting to a weird DSP without already having to rewrite your code. FDIV doesn't count; particular processor models have broken instructions all the time. From that series of articles above (https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/) - basically the IEEE standards specify how to store the final results, but not how to store intermediates, so precision errors are often different.
|
# ? Sep 14, 2016 16:03 |
|
toiletbrush posted:No it doesn't... What size is int here? If it's 32 bits then you're just showing off that 32 bits of precision is better than 24. If int is 24 bits here, then you found a flaw in my argument, that ints can have better behavior when you're overflowing them. Of course, ints can also cause much worse behavior when overflowing. Especially in C, where that's undefined behavior. Still, most code doesn't need wrapping overflow semantics, and can do integer math in floating point formats just fine.
|
# ? Sep 14, 2016 16:30 |
|
robostac posted:From that series of articles above (https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/) - basically the IEEE standards specify how to store the final results, but not how to store intermediates, so precision errors are often different. The handling of intermediates can also vary by things like stack layout and compiler flags, especially on x86. I would expect any code written carefully enough to always get the same results on one platform to survive a transition to other platforms with minimal effort. I could be wrong on that, but this is the first time I've heard of portability concerns as a reason to be wary of IEEE floating point.
|
# ? Sep 14, 2016 16:43 |
|
code:
|
# ? Sep 14, 2016 16:45 |
|
I mean, usually if you're really, really concerned about precision you either normalize your range of numbers to lower values where float works a little more reasonably, or you switch to some other rational/symbolic/scaled-integer-based representation.
|
# ? Sep 14, 2016 16:47 |
|
Dylan16807 posted:When has moving to another architecture broken floating point code? Seventh google link: https://software.intel.com/en-us/forums/intel-c-compiler/topic/518464
|
# ? Sep 14, 2016 16:50 |
|
fritz posted:Seventh google link: That's code that gives different results on the same processor based on an environment variable. I'd say it's the opposite of code that's non-broken on one architecture and breaks when you port it to another. Also they're both x86_64, not exactly what I meant when talking about architectures in reply to porting to ARM. But it is a good example of how it's often hard to use (non-integer) floating point numbers correctly in the first place. Not that any alternative is easy to use either.
|
# ? Sep 14, 2016 17:12 |
|
Dylan16807 posted:What size is int here? If it's 32 bits then you're just showing off that 32 bits of precision is better than 24.
|
# ? Sep 14, 2016 17:22 |
|
toiletbrush posted:I'm not talking about overflow semantics. I'm saying 'Integer math works just as well as it always does' is untrue, because the effects of fp rounding are way more subtle and complicated than integer overflow - even simple addition/subtraction of integers in fp falls apart surprisingly quickly. Those are overflow semantics. If you don't overflow the 24 or 53 bits you get in a float or double, it won't round. Integer overflow is not simple either. Do it in C and the program can make up whatever number it wants, or return halfway through the function, or crash. Most of the time, you should not be overflowing. It will break no matter whether it's int or float. It will just break differently.
|
# ? Sep 14, 2016 17:30 |
|
Dylan16807 posted:Those are overflow semantics. If you don't overflow the 24 or 53 bits you get in a float or double, it won't round.
|
# ? Sep 14, 2016 17:41 |
|
hackbunny posted:To write a program with an infinite loop, do you need source code of infinite length? 0.6̅ is hardly infinite, and there is a simple formula to go from that form to the fraction and back. Revised quote because apparently this is nitpicker's corner in the comedy coding horrors forum: every fraction whose denominator has a prime factor other than 2 or 5 is unrepresentable in decimal notation with a finite number of significant digits and without repeating decimal notation. My point is that 1/5 being unrepresentable in finite binary notation is the same as 1/3 being unrepresentable in finite decimal notation. Floating point is basically "scientific notation" with a finite number of significant figures. 53, in fact. It's powers of two instead of powers of ten, but, yes, all integers in that range are representable, and the fact that people don't understand that means we're not teaching floating point right.
|
# ? Sep 14, 2016 17:54 |
|
antpocas posted:
EndsWith is too hard, man.
|
# ? Sep 14, 2016 18:07 |
|
Suspicious Dish posted:Would you say that everyday decimal numbers are fuzzy, approximate things? Every fraction whose denominator has a prime factor other than 2 or 5 is unrepresentable in decimal. I used the word "heuristic" twice, what more do you want? And if you've ever done math involving multiple steps with a calculator, writing down intermediate values, then yes, the result can be fuzzy. I've seen students using the same numbers and formulas get different results due to the order they perform the steps in, the number of operations they combine, the point at which they truncate intermediate results, and so on. In fact, the software we used to administer assignments didn't insist on exact results for this reason, so yes, treating decimal numbers as fuzzy and approximate is a useful heuristic, just as it is with floats.
|
# ? Sep 14, 2016 19:04 |
|
ulmont posted:Pretty sure both have been posted here before, but helpful links regarding floating point: Article from same stating that floating point ain't magic and sometimes you really do want an equality comparison (as I think was the case in the function that led to this discussion): https://randomascii.wordpress.com/2012/06/26/doubles-are-not-floats-so-dont-compare-them/
|
# ? Sep 14, 2016 19:13 |
|
You know, what started this was testing a variable called paramDblValue against a float literal and not a double.
|
# ? Sep 14, 2016 19:16 |
|
Dylan16807 posted:That's code that gives different results on the same processor based on an environment variable. I'd say it's the opposite of code that's non-broken on one architecture and breaks when you port it to another. That's not my reading? quote:
|
# ? Sep 14, 2016 19:45 |
|
Tank Boy Ken posted:double d2(0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1); // should equal 1.0 It's not a terribly surprising output when you consider the compiler told the computer to do this: code:
|
# ? Sep 14, 2016 19:57 |
|
fritz posted:That's not my reading? First note that "different" architectures are both x86_64. Then read the post talking about MKL_CBWR. It's not the CPU causing problems here, it's the library switching to a different mode.
|
# ? Sep 14, 2016 20:17 |
|
hobbesmaster posted:You know what started this is testing a variable called paramDblValue against a float literal and not a double. Yeah but comparing against 1
|
# ? Sep 14, 2016 20:29 |
ExcessBLarg! posted:Honestly trying to remember the last time I wanted to know when some floating point value == 1.0 exactly. I can't think of one. Maybe you are dividing by the logarithm of the number?
|
|
# ? Sep 14, 2016 21:31 |
|
VikingofRock posted:Maybe you are dividing by the logarithm of the number? That certainly sounds like a situation where you'd want to use a tolerance.
|
# ? Sep 14, 2016 21:59 |
Dr. Stab posted:That certainly sounds like a situation where you'd want to use a tolerance. Maybe you have a case where large numbers are acceptable, but infinite numbers are not?
|
|
# ? Sep 14, 2016 22:20 |
|
Dylan16807 posted:Integer overflow is not simple either. Do it in C and the program can make up whatever number it wants, or return halfway through the function, or crash. Only if it's a signed integer
|
# ? Sep 14, 2016 23:28 |
|
Dr. Stab posted:That certainly sounds like a situation where you'd want to use a tolerance. Okay, fabs(x - 1.0) <= pow(2,-54). There's your tolerance.
|
# ? Sep 14, 2016 23:50 |
|
C++ code:
|
# ? Sep 15, 2016 01:13 |
|
Absurd Alhazred posted:
Maybe my addled brain is missing something, but isn't this safe? You ensure foo is non-null before you dereference it. That's what you should do. The real horror is that you forgot that quux comes after baz.
|
# ? Sep 15, 2016 01:20 |
|
It doesn't actually ensure foo is non-null.
|
# ? Sep 15, 2016 01:22 |
|
I've never made it to quux. Though I do have a "biz" in my lovely variable sequence: foo bar biz baz. Googling it, it looks like biz doesn't actually exist as part of the pattern and I should have been using quux all these years. Truly a coding horror.
|
# ? Sep 15, 2016 01:28 |
|
Plorkyeran posted:It doesn't actually ensure foo is non-null. Oh, missed the second "=". Indeed. I was going to suggest that maybe the compiler should warn you when your code contains a useless statement, but I guess you could have overloaded operator== to have side effects!
|
# ? Sep 15, 2016 01:30 |
|
Absurd Alhazred posted:
Most compilers seem to have this warning off by default (e.g. for vc++, C4555: expression has no effect; expected expression with side-effect). Why is this one off by default when they whine a ton about unused arguments?
|
# ? Sep 15, 2016 01:51 |
|
xzzy posted:I've never made it to quux. Live a little. Use blorax. TooMuchAbstraction posted:Oh, missed the second "=". indeed. It's a longtime annoyance of mine with C and derivatives, that they allow return values to just lie there in code, and that = and == look too similar. This is the first time ever this has happened to me in this direction, though. What usually gets me is if (something = awful).
|
# ? Sep 15, 2016 02:34 |
|
Absurd Alhazred posted:It's a longtime annoyance of mine with C and derivatives, that they allow return values to just lie there in code, printf returns a value.
|
# ? Sep 15, 2016 04:14 |
|
fritz posted:printf returns a value. code:
|
# ? Sep 15, 2016 06:12 |
|
Absurd Alhazred posted:It's a longtime annoyance of mine with C and derivatives, that they allow return values to just lie there in code __attribute__((warn_unused_result))
|
# ? Sep 15, 2016 06:21 |
|
Absurd Alhazred posted:Live a little. Use blorax. At that point I'd prefer tlön uqbar orbis tertius.
|
# ? Sep 15, 2016 10:15 |
|
Absurd Alhazred posted:It's a longtime annoyance of mine with C and derivatives, that they allow return values to just lie there in code By return value do you mean expression? Because that makes perfect sense
|
# ? Sep 15, 2016 15:20 |
|
XML code:
The places where .Net just becomes agnostic to things not existing are always surprising in the dumbest loving ways.
|
# ? Sep 15, 2016 16:08 |
|
Absurd Alhazred posted:It's a longtime annoyance of mine with C and derivatives, that they allow return values to just lie there in code, and that = and == look too similar. This is the first time ever this has happened to me in this direction, though. What usually gets me is if (something = awful). Which language doesn't allow you to write code:
|
# ? Sep 15, 2016 16:55 |
|
Munkeymon posted:
|
# ? Sep 15, 2016 17:22 |