Soricidus
Oct 21, 2010
freedom-hating statist shill

jit bull transpile posted:

I disagree. For me as a trans woman, being called "she" is extremely affirming and being referred to by gender neutral language feels like a microaggression denying my identity.

I totally agree with this in the real world today where gendered pronouns are standard, and I am certainly not advocating for erasing your identity as a woman.

For context though I’m nb and very uncomfortable in a world that forces me to either pretend to have a binary gender, or ask everyone I meet to make a special exception for me in their grammar.

Soricidus
Oct 21, 2010
freedom-hating statist shill

tankadillo posted:

I’ve only been on one Discourse forum and it seemed alright but the gamification stuff just feels unnecessary and infantilizing. “Congratulations!!! You just replied to a topic!!!” “Way to go, you just viewed a thread!!!!"

Achievement unlocked: get permabanned

CPColin
Sep 9, 2003

Big ol' smile.

Carbon dioxide posted:

I don't want to discredit anything you're saying here, I just have a completely unrelated question if you don't mind: what programming language is the text under your avatar?

jbt is YOSPOS's favorite reformed MUMPS developer.

Volguus
Mar 3, 2009

CPColin posted:

jbt is YOSPOS's favorite reformed MUMPS developer.

Oh, I've heard of MUMPS

Space Whale
Nov 6, 2014

Soricidus posted:

Achievement unlocked: get permabanned

We got a leaderboard for perma speedruns yet?

Kazinsal
Dec 13, 2011

Space Whale posted:

We got a leaderboard for perma speedruns yet?

I think the closest we have is a dude who got off a 100k hour probation and within 48 hours ate a month and a ban.

The MUMPSorceress
Jan 6, 2012


^SHTPSTS

Gary’s Answer

Carbon dioxide posted:

I don't want to discredit anything you're saying here, I just have a completely unrelated question if you don't mind: what programming language is the text under your avatar?

Mumps

Volguus posted:

Oh, I've heard of MUMPS

That post makes me cranky because it's really just talking about bad code and doesn't get into any of the best things about mumps.

You can look at my post history in the yospos terrible programmers and pl threads if you want a really good primer on the language

chippy
Aug 16, 2006

OK I DON'T GET IT
I was doing a code review the other day, which included this (variables renamed slightly for context)

code:
if (Math.abs(newExposureTime - getCurrentExposureTime()) < 1e-5) {
	return;
}
exposureTimeText.setText(Double.toString(newExposureTime));
My first comment was to ask the significance of the number 1e-5, and to suggest it be made a named constant to aid readability. I got the response "no significance, just picked a really small number to make sure the value had actually changed."

I was :psyduck:-ing at this when the change was updated:

code:
if (Math.abs(newExposureTime - getCurrentExposureTime()) > 0) {
	exposureTimeText.setText(Double.toString(newExposureTime));
}
At this point I was starting to doubt myself, and wonder if I was missing some reason why this wasn't functionally identical to !=, but I suggested it anyway: "couldn't this just be 'if (newExposureTime != getCurrentExposureTime())'?"

"Yes, I think it's ok in this case."

Dude is a contractor.

chippy fucked around with this message at 14:47 on Oct 16, 2019

Hughlander
May 11, 2005

chippy posted:

I was doing a code review the other day, which included this (variables renamed slightly for context)

code:
if (Math.abs(newExposureTime - getCurrentExposureTime()) < 1e-5) {
	return;
}
exposureTimeText.setText(Double.toString(newExposureTime));
My first comment was to ask the significance of the number 1e-5, and to suggest it be made a named constant to aid readability. I got the response "no significance, just picked a really small number to make sure the value had actually changed."

I was :psyduck:-ing at this when the change was updated:

code:
if (Math.abs(newExposureTime - getCurrentExposureTime()) > 0) {
	exposureTimeText.setText(Double.toString(newExposureTime));
}
At this point I was starting to doubt myself, and wonder if I was missing some reason why this wasn't functionally identical to !=, but I suggested it anyway: "couldn't this just be 'if (newExposureTime != getCurrentExposureTime())'?"

"Yes, I think it's ok in this case."

Dude is a contractor.

Probably drilled into his head at some point to never use equality with a float/double and the rest is automatic.
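
The textbook illustration of why that rule gets drilled in (a standalone sketch, nothing to do with the code under review):

C++ code:
#include <cstdio>

int main() {
    double a = 0.1 + 0.2;  // nearest double to 0.1 plus nearest double to 0.2
    double b = 0.3;        // nearest double to 0.3 -- different bits
    std::printf("%d\n", a == b);          // prints 0
    std::printf("%.17g\n%.17g\n", a, b);  // 0.30000000000000004 / 0.29999999999999999
    return 0;
}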

chippy
Aug 16, 2006

OK I DON'T GET IT

Hughlander posted:

Probably drilled into his head at some point to never use equality with a float/double and the rest is automatic.

Interesting, maybe I need to read up on this myself.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


What every computer scientist should know about floating point arithmetic

Xarn
Jun 26, 2015
I am at the point where I think an IEEE-754 quiz should pop up when you try to compile code that compares floats. If you fail, it is a compilation error.

Jewel
May 2, 2009

yeah, abs(b-a) <= VERY_SMALL_DELTA is in pretty much every codebase I've ever worked in at some point, it's pretty useful. Just checked the one I'm on now and yep

Xarn
Jun 26, 2015
It is also wrong

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
In the code as presented, == is entirely sufficient. What's the fail case? You end up stringifying a double that's almost but not quite the same as the one that's currently being displayed?

When you start getting in trouble is if you want to compute something in two different ways, but still recognize it as the same result.

Linear Zoetrope
Nov 28, 2011

A hero must cook
In fairness, absolutely no method of float comparison is (uniformly) right :v.

But yeah, that one is especially wrong (but, IIRC, only worse than "relative equality" when the values are very small). Its main issue is really the trickiness of choosing an epsilon. When using relative equality in unit tests before, I've often gotten false negatives and needed to fudge the epsilon after hand-verifying the solution is correct.

E: And yes, there are cases where absolute float == equality is 100% what you want. For instance, a function that should spit back out the exact parameter it's given, or a function that should spit out a specific number under certain error conditions (e.g. a function that "saturates" above/below a specific value should return the saturated value exactly).

V Yeah, and it's not always the right tool

Linear Zoetrope fucked around with this message at 16:24 on Oct 16, 2019

Xarn
Jun 26, 2015
I don't want to make an effort post from my phone, but yeah, there are basically 4 different float comparisons: exact, absolute margin, relative margin, and ULPs, and you need to pick which one fits your use case.


I can also tell you that I'd be surprised if 1 in 100 developers who use floating point numbers understood which comparison to use when, judging by the issue tracker of my oss project (a testing framework).

chippy
Aug 16, 2006

OK I DON'T GET IT
Well I learned some stuff.

Jabor posted:

In the code as presented, == is entirely sufficient. What's the fail case? You end up stringifying a double that's almost but not quite the same as the one that's currently being displayed?

Well yeah. I probably wouldn't even bother testing the value, it's just updating a text box with it and it's not something that's happening particularly often.

chippy fucked around with this message at 16:33 on Oct 16, 2019

Ola
Jul 19, 2004

Hehe I learned some stuff too and nobody knows I didn't know it :c00l:

Zopotantor
Feb 24, 2013

...and once he's in, we'll never let him out again...

jit bull transpile posted:

Mumps

[...]

You can look at my post history in the yospos terrible programmers and pl threads if you want a really good primer on the language

itsatrap dot gif

SupSuper
Apr 8, 2009

At the Heart of the city is an Alien horror, so vile and so powerful that not even death can claim it.
In my experience, we've successfully mistrained everyone on floating points to the same degree as "secure passwords", so half the programmers I work with think floating point is just this quantum random unpredictable nightmare you can never compare.

Most times you don't want equality, you just want to know if two values are in close proximity, with a much greater tolerance than any Epsilon or ULP, so something like abs(x-y) < 0.001 is fine.
Or you just want to check if a value you set is still in there, for which equality is fine. If "x = y" then "x == y" is always true as long as they go through the same operations.

It's only when you're comparing floats that have gone through different inputs or operations that you need to bring out the "what every programmer should know about floats", and you'll still be just as confused and just use some good-enough mix of relative and absolute comparisons.

Xarn
Jun 26, 2015

SupSuper posted:

with a much greater tolerance than any Epsilon or ULP, so something like abs(x-y) < 0.001 is fine.

:eng99:

Xarn
Jun 26, 2015
Well, our VPN is still broken so here comes an effort post on floating point comparisons.

As far as I am aware, there are four different basic ways to compare floating point numbers.
  1. Exact comparison
  2. Absolute margin comparison
  3. Relative epsilon comparison
  4. ULP comparison

Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.]), in which case it avoids falsely accepting close-but-wrong inputs. The other main case where you get exactly the same number is if you take the same inputs and put them through the same computation, e.g. (some_number + some_constant1) * some_constant2 - some_constant3.

Exact comparison is bad for pretty much every other use case.



Absolute margin comparison is when you write fabs(lhs - rhs) <= margin*. The problem with absolute margin comparison is that for large numbers it decays into an exact-comparison check, for the same reason that for large floating point numbers assert(some_float + 1 == some_float) succeeds. Absolute margin does have two big advantages: it is easy enough to reason about decimally (I want to be within margin of the target), and it does not break down around 0. The disadvantage is that it is hard to reason about numerically, as the effective tolerance you get decreases as lhs and rhs grow.


Around the internet, the most commonly used relative epsilon comparison is when you write fabs(lhs - rhs) <= epsilon * max(fabs(lhs), fabs(rhs)), but sometimes the form fabs(lhs - rhs) <= epsilon * min(fabs(lhs), fabs(rhs)) is used instead. The idea is to adjust your margin automatically to the scale of your numbers. This fixes the problems absolute margin comparison has with large numbers, but introduces the same "decay to exact comparison" problem for comparisons that involve 0. With the exception of numbers that are very close to 0, this comparison is easy enough to reason about both decimally and numerically, but is not ideal because of the surprise factor of max or min in the formula above.


Then there is ULP based comparison. This comparison builds directly on the fact that floating point numbers are a finite subset of the real numbers, and that there is a minimum distance between two representable floating point numbers. If two numbers are 1 ULP apart, then there is no representable floating point number between them. If two numbers are 2 ULPs apart, then there is only 1 representable floating point number between them, and so on... This obviously eliminates the scaling problem of both absolute margin and relative epsilon comparisons.

However, ULP based comparisons are also by far the hardest to reason about decimally. On the other hand, they are perfect for numerical reasoning if you understand how floating point operations work and what you can expect from their rounding, and thus they are most commonly used when you are implementing numerical code. E.g. if you are implementing sin or other math functions -- you know what precision you have and what is the minimum precision lost by the operations you have to do, and then you assert that you did not lose more precision than was absolutely necessary.
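
Roughly in code, with made-up function names (a sketch of the idea, not lifted from Catch2 or any other real library):

C++ code:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstring>

// 1. Exact comparison
bool exact_equal(double lhs, double rhs) {
    return lhs == rhs;
}

// 2. Absolute margin: the tolerance stays fixed regardless of magnitude
bool within_margin(double lhs, double rhs, double margin) {
    return std::fabs(lhs - rhs) <= margin;
}

// 3. Relative epsilon: the tolerance scales with the larger of the two values
bool within_rel_epsilon(double lhs, double rhs, double epsilon) {
    return std::fabs(lhs - rhs) <= epsilon * std::max(std::fabs(lhs), std::fabs(rhs));
}

// 4. ULP distance: how many representable doubles lie between lhs and rhs
// (NaNs and infinities are not handled; it's a sketch)
bool within_ulps(double lhs, double rhs, std::uint64_t max_ulps) {
    if (std::signbit(lhs) != std::signbit(rhs)) {
        return lhs == rhs;  // covers +0.0 == -0.0; otherwise different signs are never close
    }
    std::uint64_t a, b;
    std::memcpy(&a, &lhs, sizeof a);  // reinterpret the bits; for doubles of the same sign,
    std::memcpy(&b, &rhs, sizeof b);  // adjacent values have adjacent integer representations
    return (a > b ? a - b : b - a) <= max_ulps;
}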

------------------

To avoid making things too simple, people also commonly talk about machine epsilon, which is the numerical difference between 1.0 and the next higher representable value (or a value that is 1 ULP from 1.0 in the direction of positive infinity). Do not mistake this for the epsilon in relative epsilon comparison, even though they are often related (e.g. Catch2 sets the relative epsilon to 100 * the machine epsilon of the given floating point type).

------------------

* Funnily enough, this way of writing the check gives you different results from writing lhs - rhs <= margin || rhs - lhs <= margin for specific inputs. Which one you want depends on your use case.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Xarn posted:

Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.]), in which case it avoids falsely accepting close-but-wrong inputs. The other main case where you get exactly the same number is if you take the same inputs and put them through the same computation, e.g. (some_number + some_constant1) * some_constant2 - some_constant3.

This is one of the cases where you can get really tripped up though - just because you've written the same computation in two different parts of your program doesn't mean they're going to produce the same result. Or heck, even if you've only written it once, you could get tripped up if it ends up getting inlined into multiple places.

C compilers in particular consider themselves to have free rein to do floating point calculations at double-extended precision and then either round intermediate values to double precision or not, depending on arbitrary whims, unless you specifically tell them not to do that.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Technically, C only permits that on “operands”, i.e. within an expression as opposed to across statements, but compilers are indeed sometimes more aggressive than that, and they don’t all interpret the various options and pragmas correctly.
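
If you're curious what evaluation mode your compiler claims to use, the standard exposes it (a minimal sketch; this only reports the declared mode, not whatever fast-math style flags do on top of it):

C++ code:
#include <cfloat>
#include <cstdio>

int main() {
    // FLT_EVAL_METHOD: 0 = operations evaluated at the type's own precision,
    // 1 = float/double arithmetic carried out in double,
    // 2 = everything carried out in long double (classic x87 behaviour),
    // -1 = indeterminable.
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    return 0;
}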

Steve French
Sep 8, 2003

Xarn posted:

Exact comparison is what happens when you write lhs == rhs. This works if you know the exact value you should get (e.g. you clamp inputs to [0., 1.])

Can you explain this in a bit more detail? Specifically the part about clamping. My (far from expert) understanding of floating point isn't helping me get why that's significant.

Beef
Jul 26, 2004
If you know what you are doing and set the right compiler options (no fast-math, no vectorization), floating point ops can be deterministic on the same machine or architecture; it's designed that way. You do have some operations that are not associative, so parallelization needs to be strictly controlled as well.

That said, dear lord avoid floats if you need deterministic simulations across architectures. Game devs famously prefer to write their own fixed-point library rather than deal with float determinism https://www.youtube.com/watch?v=wwLW6CjswxM.

I hear coding horror stories from national labs: whenever they make changes to their thermonuclear simulation software or hardware, they need to produce binary-compatible float outputs. For some reason they cannot re-validate their simulations on real world data.

OddObserver
Apr 3, 2009
Coding horror: x87 FP math.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
look at this horrendous error graph on ARM NEON's VRECPE http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14282.html

there's no guarantee anything close to correct is going on

Xarn
Jun 26, 2015
Yeah, there is a reason why compilers prefer routines from their math library to calling CPU instructions for that functionality.

OddObserver posted:

Coding horror: x87 FP math.

I originally had an aside for exact comparisons about the horrors of x87 and how you can sometimes compare two double variables and find that one is an actual 64-bit double read from memory while the other was just calculated in 80 bits and not yet rounded off, but I decided against it. At least if you are compiling for x64, you will always use SSE for your floating point. :v:

Xarn
Jun 26, 2015

Steve French posted:

Can you explain this in a bit more detail? Specifically the part about clamping. My (far from expert) understanding of floating point isn't helping me get why that's significant.

Probably a bad word choice on my end, but the idea is that if you have code like this

C++ code:
#include <cmath>
#include <vector>

double event_probability(std::vector<double> const& observation_probability) {
    double probability = 1.;
    for (double prob : observation_probability) {
        probability *= prob;
    }
    // nobody likes denormal numbers.
    if (std::isnormal(probability)) { return probability; }
    return 0.;
}
you do not have to check for the return value being close to 0, because you know it returns exactly 0. when that case is hit.
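
So the caller can compare exactly against that sentinel (hypothetical calling code, observations is just a made-up variable):

C++ code:
// exact comparison is fine here: the function promises to return exactly 0.
// in the non-normal case, so there is no tolerance to pick
if (event_probability(observations) == 0.) {
    // skip the impossible event
}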

Spatial
Nov 15, 2007

It's a fast approximation instruction, of course it's going to be like that

Spatial
Nov 15, 2007

Basically there's no way programmers at large will ever be good at using floats because it requires two things they will never have: an elementary understanding of how the computer works, and the ability to do basic math

please vote this post +5 insightful, thank you

CPColin
Sep 9, 2003

Big ol' smile.
+4.99999999999999998

iospace
Jan 19, 2038


We're all floats down here.

Ola
Jul 19, 2004

I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer.

The MUMPSorceress
Jan 6, 2012


^SHTPSTS

Gary’s Answer

Ola posted:

I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer.

Yes, in some cases. In others they're using modern formats like BigDecimal, but on old mainframes and whatever other poo poo they're running, storing everything as an int is the only way to guarantee no loss of precision.
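
A minimal sketch of the integer-cents idea (made-up snippet, not anybody's actual banking code):

C++ code:
#include <cstdio>

int main() {
    long long balance_cents = 10007;  // $100.07 stored as 10007 cents
    balance_cents += 250;             // deposit $2.50: exact integer math, no rounding
    // convert to dollars and cents only at the presentation layer
    std::printf("$%lld.%02lld\n", balance_cents / 100, balance_cents % 100);
    return 0;
}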

Votlook
Aug 20, 2005

Ola posted:

I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer.

Something like this is also used in adtech, where for example the price of a single ad impression is represented in micros, 1/1000000 of a dollar.

Carbon dioxide
Oct 9, 2012

Ola posted:

I don't remember how I heard this, maybe I dreamed it up myself, but is it true that banks etc do calculation and storage using integers only as cents? So $100.07 is stored as 10007 cents and only converted into dollars and cents in the presentation layer.

As far as I know, for certain things they actually use integers to represent something like one hundredth of a cent. For stock interests and stuff those amounts might add up and be relevant.

There's an infamous and most probably fake story about that: supposedly someone once hacked a bank so that for every one of the billions of transactions happening each day, a sub-cent amount was taken out of the transaction and transferred to the hacker's account instead of to the recipient, and because none of the normal bank computer views showed those precise amounts, it took the bank ages to discover this.

Ola
Jul 19, 2004

Thanks. Kind of hoping I made it up myself now, so I actually came up with a good idea even if it was half a century late. And ad pricing reminds me of another one: I should just be able to pay Google $25 per year for an ad-free internet.
