|
jit bull transpile posted:I disagree. For me as a trans woman, being called "she" is extremely affirming and being referred to by gender neutral language feels like a microaggression denying my identity. I totally agree with this in the real world today where gendered pronouns are standard, and I am certainly not advocating for erasing your identity as a woman. For context though I’m nb and very uncomfortable in a world that forces me to either pretend to have a binary gender, or ask everyone I meet to make a special exception for me in their grammar.
|
# ? Oct 15, 2019 09:27 |
|
tankadillo posted:I’ve only been on one Discourse forum and it seemed alright but the gamification stuff just feels unnecessary and infantilizing. “Congratulations!!! You just replied to a topic!!!” “Way to go, you just viewed a thread!!!!" Achievement unlocked: get permabanned
|
# ? Oct 15, 2019 09:30 |
|
Carbon dioxide posted:I don't want to discredit anything you're saying here, I just have a completely unrelated question if you don't mind: what programming language is the text under your avatar? jbt is YOSPOS's favorite reformed MUMPS developer.
|
# ? Oct 15, 2019 14:36 |
|
CPColin posted:jbt is YOSPOS's favorite reformed MUMPS developer. Oh, I've heard of MUMPS
|
# ? Oct 15, 2019 14:58 |
|
Soricidus posted:Achievement unlocked: get permabanned We got a leaderboard for perma speedruns yet?
|
# ? Oct 15, 2019 18:21 |
|
Space Whale posted:We got a leaderboard for perma speedruns yet? I think the closest we have is a dude who got off a 100k hour probation and within 48 hours ate a month and a ban.
|
# ? Oct 15, 2019 18:35 |
|
Carbon dioxide posted:I don't want to discredit anything you're saying here, I just have a completely unrelated question if you don't mind: what programming language is the text under your avatar? Mumps. Volguus posted:Oh, I've heard of MUMPS That post makes me cranky because it's really just talking about bad code and doesn't get into any of the best things about mumps. You can look at my post history in the yospos terrible programmers and pl threads if you want a really good primer on the language
|
# ? Oct 16, 2019 00:56 |
|
I was doing a code review the other day, which included this (variables renamed slightly for context): code:
I was -ing at this when the change was updated: code:
"Yes, I think it's ok in this case." Dude is a contractor. chippy fucked around with this message at 14:47 on Oct 16, 2019 |
# ? Oct 16, 2019 14:44 |
|
chippy posted:I was doing a code review the other day, which included this (variables renamed slightly for context) Probably drilled into his head at some point to never use equality with a float/double and the rest is automatic.
|
# ? Oct 16, 2019 14:49 |
|
Hughlander posted:Probably drilled into his head at some point to never use equality with a float/double and the rest is automatic. Interesting, maybe I need to read up on this myself.
|
# ? Oct 16, 2019 15:27 |
|
What every computer scientist should know about floating point arithmetic
|
# ? Oct 16, 2019 15:35 |
|
I am at the point where I think an IEEE-754 quiz should pop up when you try to compile code that compares floats. If you fail, it is a compilation error.
|
# ? Oct 16, 2019 15:37 |
|
yeah, abs(b-a) <= VERY_SMALL_DELTA is in pretty much every codebase I've ever worked in at some point, it's pretty useful. Just checked the one I'm on now and yep
|
# ? Oct 16, 2019 15:38 |
|
It is also wrong
|
# ? Oct 16, 2019 16:02 |
|
In the code as presented, == is entirely sufficient. What's the fail case? You end up stringifying a double that's almost but not quite the same as the one that's currently being displayed? When you start getting in trouble is if you want to compute something in two different ways, but still recognize it as the same result.
|
# ? Oct 16, 2019 16:08 |
|
In fairness, absolutely no method of float comparison is (uniformly) right :v. But yeah, that one is especially wrong (though, IIRC, only worse than "relative equality" when the values are very small). Its main issue is really the trickiness of choosing an epsilon. When using relative equality in unit tests, I've often gotten false negatives and needed to fudge the epsilon after hand-verifying that the solution is correct. E: And yes, there are cases where absolute float == equality is 100% what you want. For instance, a function that should spit back out the exact parameter it's given, or a function that should spit out a specific number under certain error conditions (e.g. a function that "saturates" above/below a specific value should return the saturated value exactly). V Yeah, and it's not always the right tool Linear Zoetrope fucked around with this message at 16:24 on Oct 16, 2019 |
# ? Oct 16, 2019 16:10 |
|
I don't want to make an effort post from my mobile phone, but yeah, there are basically four different float comparisons: exact, absolute margin, relative margin, and ULPs, and you need to pick which one fits your use case. I can also tell you that I'd be surprised if 1 in 100 developers who use floating point numbers understood which comparison to use when, judging by the issue tracker of my OSS project (a testing framework).
|
# ? Oct 16, 2019 16:17 |
|
Well I learned some stuff.Jabor posted:In the code as presented, == is entirely sufficient. What's the fail case? You end up stringifying a double that's almost but not quite the same as the one that's currently being displayed? Well yeah. I probably wouldn't even bother testing the value, it's just updating a text box with it and it's not something that's happening particularly often. chippy fucked around with this message at 16:33 on Oct 16, 2019 |
# ? Oct 16, 2019 16:29 |
|
Hehe I learned some stuff too and nobody knows I didn't know it
|
# ? Oct 16, 2019 17:08 |
|
jit bull transpile posted:Mumps itsatrap dot gif
|
# ? Oct 16, 2019 17:48 |
|
In my experience, we've successfully mistrained everyone on floating points to the same degree as "secure passwords", so half the programmers I work with think floating point is just this quantum random unpredictable nightmare you can never compare. Most times you don't want equality, you just want to know if two values are in close proximity, with a much greater tolerance than any Epsilon or ULP, so something like abs(x-y) < 0.001 is fine. Or you just want to check if a value you set is still in there, for which equality is fine. If "x = y" then "x == y" is always true as long as they go through the same operations. It's only when you're comparing floats that have gone through different inputs or operations that you need to bring out the "what every programmer should know about floats", and you'll still be just as confused and just use some good-enough mix of relative and absolute comparisons.
|
# ? Oct 16, 2019 22:18 |
|
SupSuper posted:with a much greater tolerance than any Epsilon or ULP, so something like abs(x-y) < 0.001 is fine.
|
# ? Oct 17, 2019 10:49 |
|
Well, our VPN is still broken so here comes an effort post on floating point comparisons. As far as I am aware, there are four different basic ways to compare floating point numbers.
Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.]), in which case it avoids falsely accepting close-but-wrong inputs. The other main case where you get exactly the same number is if you take the same inputs and put them through the same computation, e.g. (some_number + some_constant1) * some_constant2 - some_constant3. Exact comparison is bad for pretty much every other use case.
Absolute margin comparison is when you write fabs(lhs - rhs) <= margin*. The problem with absolute margin comparison is that for large numbers it decays into an exact comparison, for the same reason that for large floating point numbers assert(some_float + 1 == some_float) succeeds. Absolute margin does have two big advantages: the first is that it is easy to reason about decimally (I want to be within margin of the target), and the second is that it does not break down around 0. The disadvantage is that it is hard to reason about numerically, as the actual tolerance you get decreases with increasing lhs and rhs.
Around the internet, the most commonly used relative epsilon comparison is when you write fabs(lhs - rhs) <= epsilon * max(fabs(lhs), fabs(rhs)), but sometimes the form fabs(lhs - rhs) <= epsilon * min(fabs(lhs), fabs(rhs)) is used instead. The idea is to adjust your margin automatically to the scale of your numbers. This fixes the problems absolute margin comparison has with large numbers, but introduces the same "decay to exact comparison" problem for comparisons that involve 0. With the exception of numbers very close to 0, this comparison is easy enough to reason about both decimally and numerically, but is not ideal because of the surprise factor of max or min in the formula above.
Then there is ULP based comparison. This comparison works directly with the fact that floating point numbers are a finite subset of the real numbers, and that there is a minimal distance between two representable floating point numbers. If two numbers are 1 ULP apart, there is no representable floating point number between them. If two numbers are 2 ULPs apart, there is exactly 1 representable floating point number between them, and so on... This obviously eliminates the scaling problem of both absolute margin and relative epsilon comparisons. However, ULP based comparisons are also the absolute hardest to reason about decimally. On the other hand, they are perfect for numerical reasoning if you understand how floating point operations work and what you can expect from their rounding, and thus they are most commonly used when you are implementing numerical code. E.g. if you are implementing sin or other math functions -- you know what precision you have and what the minimal precision loss from the operations you have to do is, and then you assert that you did not lose more precision than was absolutely necessary. ------------------ To avoid making things too simple, people also commonly talk about machine epsilon, which is the numerical difference between 1.0 and the next higher representable value (i.e. the value that is 1 ULP from 1.0 in the direction of positive infinity). Do not mistake this for the epsilon in relative epsilon comparison, even though they are often related (e.g. Catch2 sets its relative epsilon to 100 * the machine epsilon of the given floating point type). ------------------ * Funnily enough, this way of writing the check gives you different results from writing lhs - rhs <= margin || rhs - lhs <= margin for specific inputs. Which one you want depends on your use case.
|
# ? Oct 17, 2019 12:58 |
|
Xarn posted:Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.]), in which case it avoids falsely accepting wrong close-but-wrong inputs. The other main case where you get an exactly the same number is if you take the same inputs and place them through the same computation, e.g. (some-number + some_constant1) * some_constant2 - some_constant3. This is one of the cases where you can really get tripped up though - just because you've written the same computation in two different parts of your program doesn't mean they're going to get the same result. Or heck, even if you've only written it once, you could get tripped up if it ends up getting inlined into multiple places. C compilers in particular consider themselves to have free rein to do floating point calculations at double-extended precision, and then either round intermediate values to double precision or not, depending on arbitrary whims. Unless you specifically tell them not to do that.
|
# ? Oct 17, 2019 14:48 |
|
Technically, C only permits that on “operands”, i.e. within an expression as opposed to across statements, but compilers are indeed sometimes more aggressive than that, and they don’t all interpret the various options and pragmas correctly.
|
# ? Oct 17, 2019 16:06 |
|
Xarn posted:Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.]) Can you explain this in a bit more detail? Specifically the part about clamping. My (far from expert) understanding of floating point isn't helping me get why that's significant.
|
# ? Oct 17, 2019 18:17 |
|
If you know what you are doing, and set the right compiler options (no fast math, no vectorization), floating point operations can be deterministic on the same machine or architecture; it's designed that way. You do have some operations that are not associative, so parallelization needs to be strictly controlled as well. That said, dear lord avoid floats if you need deterministic simulations across architectures. Game devs famously prefer to write their own fixed-point library rather than deal with float determinism https://www.youtube.com/watch?v=wwLW6CjswxM. I hear coding horror stories of national labs where, whenever they make changes to their thermonuclear simulation software or hardware, they need to produce binary compatible float outputs. For some reason they cannot re-validate their simulations on real world data.
|
# ? Oct 17, 2019 18:24 |
|
Coding horror: x87 FP math.
|
# ? Oct 17, 2019 18:39 |
|
look at this horrendous error graph on ARM NEON's VRECPE http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14282.html there's no guarantee anything close to correct is going on
|
# ? Oct 17, 2019 18:50 |
|
Yeah, there is a reason why compilers prefer routines from their math library to calling CPU instructions for that functionality. OddObserver posted:Coding horror: x87 FP math. I originally had an aside for exact comparisons about the horrors of x87, and how sometimes you compare two double variables only to find that one is an actual 64-bit double read from memory while the other was just calculated in 80 bits and not yet rounded off, but decided against it. At least if you are compiling for x64, you will always use SSE for your floating points.
|
# ? Oct 17, 2019 19:05 |
|
Steve French posted:Can you explain this in a bit more detail? Specifically the part about clamping. My (far from expert) understanding of floating point isn't helping me get why that's significant. Probably a bad word choice on my end, but the idea is that if you have code like this C++ code:
|
# ? Oct 17, 2019 19:14 |
|
It's a fast approximation instruction, of course it's going to be like that
|
# ? Oct 17, 2019 19:16 |
|
Basically there's no way programmers at large will ever be good at using floats because it requires two things they will never have: an elementary understanding of how the computer works, and the ability to do basic math please vote this post +5 insightful, thank you
|
# ? Oct 17, 2019 19:57 |
|
+4.99999999999999998
|
# ? Oct 17, 2019 19:59 |
|
We're all floats down here.
|
# ? Oct 17, 2019 20:05 |
|
I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer.
|
# ? Oct 17, 2019 21:17 |
|
Ola posted:I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer. Yes in some cases. In others they're using modern formats like BigDecimal, but in old mainframes and whatever other poo poo they're running, storing everything as int is the only way to guarantee no loss of precision.
|
# ? Oct 17, 2019 21:29 |
|
Ola posted:I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer. Something like this is also used in adtech, where for example the price of a single ad impression is represented in micros, 1/1000000 of a dollar.
|
# ? Oct 17, 2019 21:32 |
|
Ola posted:I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer. As far as I know, for certain things they actually use integers to represent something like one hundredth of a cent. For stock interest and stuff those amounts might add up and be relevant. There's an infamous and most probably fake story about that, claiming that someone once hacked a bank so that for each of the billions of transactions happening every day, a sub-cent amount was taken out of the transaction and transferred from the sender to the hacker's bank account instead of to the recipient, and because none of the normal bank computer views showed those precise amounts, it took the bank ages to discover this.
|
# ? Oct 17, 2019 21:43 |
|
Thanks, kind of hoping I made it up myself now so I actually came up with a good idea even if it was half a century late. And ad pricing reminds me of another one, I should just be able to pay Google $25 per year for an ad free internet.
|
# ? Oct 17, 2019 21:44 |