Xerophyte
Mar 17, 2008

This space intentionally left blank
x != x is a common way to test for NaNs as they are the only floats with that behavior. NaNs are also not greater than, less than, greater or equal, or less or equal to any other floats. The idea is that equals and the other comparators for floats are there to impose an ordering, and NaNs fundamentally cannot be placed anywhere in that ordering. All the ordering operators therefore return false.

This also has the benefit of avoiding sqrt(-2) == log(-2) and similar evaluating to true. [E:] Incidentally, 1/0 == 2/0 and similar may evaluate to true because infinities of the same sign do compare as equal, as they are part of the floating point ordering. I'm not saying that you should rely on non-finite values behaving in any particularly nice way.
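
In C terms the whole thing fits in a few lines (nothing exotic, just the standard IEEE 754 comparison semantics):

code:
#include <math.h>
#include <stdio.h>

int main(void) {
    double v = sqrt(-2.0);  /* any invalid operation produces a NaN */

    /* NaN is unordered: every ordering comparison involving it is false. */
    printf("%d %d %d %d %d\n",
           v == v, v < 1.0, v > 1.0, v <= v, v >= v);
    /* prints: 0 0 0 0 0 */

    /* ...which makes x != x true exactly for NaNs. */
    printf("%d %d\n", v != v, isnan(v));  /* prints: 1 1 */
    return 0;
}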

Side question: if you want NaN == NaN, would you also want qNaN == sNaN?

Xerophyte fucked around with this message at 14:51 on Feb 22, 2022

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Xerophyte posted:

x != x is a common way to test for NaNs as they are the only floats with that behavior. NaNs are also not greater than, less than, greater or equal, or less or equal to any other floats. The idea is that equals and the other comparators for floats are there to impose an ordering, and NaNs fundamentally cannot be placed anywhere in that ordering. All the ordering operators therefore return false.

This also has the benefit of avoiding sqrt(-2) == log(-2) and similar evaluating to true.

Side question: if you want NaN == NaN, would you also want qNaN == sNaN?

Don't forget -NaN

LOOK I AM A TURTLE
May 22, 2003

"I'm actually a tortoise."
Grimey Drawer

Bonfire Lit posted:

Beats me! I've elected not to check source control because I suspect if I do I'll just get salty at whoever wrote that and whoever reviewed it afterwards.

Was the project originally pure JS? If it was, I think we can excuse whoever converted it to TS for not wanting to risk changing the output of the function, since at that point they may not yet have had full confidence that they could track down all usages of the function.

If the function arrived ex nihilo with the number | boolean return type in the signature then that's completely absurd.

Tei
Feb 19, 2011

Xerophyte posted:

x != x is a common way to test for NaNs as they are the only floats with that behavior. NaNs are also not greater than, less than, greater or equal, or less or equal to any other floats. The idea is that equals and the other comparators for floats are there to impose an ordering, and NaNs fundamentally cannot be placed anywhere in that ordering. All the ordering operators therefore return false.

This also has the benefit of avoiding sqrt(-2) == log(-2) and similar evaluating to true. [E:] Incidentally, 1/0 == 2/0 and similar may evaluate to true because infinities of the same sign do compare as equal, as they are part of the floating point ordering. I'm not saying that you should rely on non-finite values behaving in any particularly nice way.

Side question: if you want NaN == NaN, would you also want qNaN == sNaN?

It is the standardised behavior; more things would break than get fixed if you made === true for NaNs. But it's unfortunate: === means "the left value and the right value are the same thing", and the more exceptions to that, the worse.

Xerophyte
Mar 17, 2008

This space intentionally left blank

Tei posted:

It is the standardised behavior; more things would break than get fixed if you made === true for NaNs. But it's unfortunate: === means "the left value and the right value are the same thing", and the more exceptions to that, the worse.

"The same thing" just isn't that straight-forward when it comes to floats. 0b0111 1111 1000 0000 0000 0000 0000 0000 and 0b1111 1111 1101 0101 0101 0101 0101 0101 are both NaNs, for instance. One of them is (probably -- IEE754 allows implementations to decide how signaling and quiet NaNs are defined. Also I am not super sure of NaN bit patterns) a 32 bit qNaN with a negative sign, the other an sNaN with positive sign. They also have different bit payloads that may or may not be significant, depending on the application, platform, etc.

Other fun examples are the Denormals Are Zero and Flush To Zero flags, which can be used to change your CPU's comparison and operator behaviors for numbers very close to zero on a per-thread basis. They're very useful since denormals are, as a rule, made of pure evil. Floating point is a land full of adventures that'll eventually land you in this thread.
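
On x86 those flags live in the per-thread MXCSR register; a sketch using the SSE intrinsics from xmmintrin.h and pmmintrin.h:

code:
#include <pmmintrin.h>
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    volatile float tiny = 1e-40f;  /* subnormal: below FLT_MIN (~1.18e-38) */
    printf("%g\n", tiny * 1.0f);   /* ~1e-40, via the slow denormal path */

    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          /* tiny results -> 0 */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  /* tiny inputs  -> 0 */

    printf("%g\n", tiny * 1.0f);   /* now 0: the denormal input was flushed */
    return 0;
}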

Xerophyte fucked around with this message at 15:34 on Feb 22, 2022

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Xerophyte posted:

"The same thing" just isn't that straight-forward when it comes to floats. 0b0111 1111 1000 0000 0000 0000 0000 0000 and 0b1111 1111 1101 0101 0101 0101 0101 0101 are both NaNs, for instance. One of them is (probably -- IEE754 allows implementations to decide how signaling and quiet NaNs are defined. Also I am not super sure of NaN bit patterns) a 32 bit qNaN with a negative sign, the other an sNaN with positive sign. They also have different bit payloads that may or may not be significant, depending on the application, platform, etc.

Other fun examples are the Denormals Are Zero and Flush To Zero flags, which can be used to change your CPU's comparison and operator behaviors for numbers very close to zero on a per-thread basis. They're very useful since denormals are, as a rule, made of pure evil. Floating point is a land full of adventures that'll eventually land you in this thread.

I'm still not sure anyone ever wanted float's mixed precision. I'm not aware of anyone who uses both nanometer and megameter lengths in the same model. IMO fixed point has always been better. Only issue with that is all the optimized float compute hardware.

Though there could have been a world where that was parallel integer computations with a register-defined fixed point.

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

leper khan posted:

I'm still not sure anyone ever wanted float's mixed precision. I'm not aware of anyone who uses both nanometer and megameter lengths in the same model. IMO fixed point has always been better. Only issue with that is all the optimized float compute hardware.

Though there could have been a world where that was parallel integer computations with a register-defined fixed point.

Source your quotes

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

I'm still not sure anyone ever wanted float's mixed precision. I'm not aware of anyone who uses both nanometer and megameter lengths in the same model. IMO fixed point has always been better. Only issue with that is all the optimized float compute hardware.
Sometimes floats are used for storing things other than lengths

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Volte posted:

Sometimes floats are used for storing things other than lengths

Ok, nanosecond and fortnight.

1 ml of water and the volume of the ocean.

The cultural value of the forums and the collected works of H.P. Lovecraft.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


So why do you think floating point was introduced and became the standard?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

ultrafilter posted:

So why do you think floating point was introduced and became the standard?

Because we live in the bad timeline

ExcessBLarg!
Sep 1, 2001
NaNs are weird on the surface, but they're nice in that they give a defined way to deal with undefined math that doesn't result in a processor-level exception.

Compare what happens when you do an integer divide by zero ("0/0"): the CPU throws a divide-by-zero interrupt, which calls the divide_error trap handler in your kernel, which then has to figure out which running process was doing Bad Math and send a SIGFPE signal (ironically named for a "floating-point exception" but also sent for fixed math here) to the process. If your application is written in C it probably doesn't handle SIGFPE and dumps core. If your application is written in a VM-based language then the signal will be caught and a language-level exception (ArithmeticException, ZeroDivisionError, etc.) will be thrown, which again your application probably doesn't catch so it still bubbles up to the default exception handler and dumps a stack trace, and the user is left wondering WT-genuine-F.

In contrast, doing "0.0/0.0" generates a NaN which is handled sensibly (if uselessly) by subsequent floating point calculations until your result printf shows "NaN" to the user. They're still wondering WTF, but you've saved a few trees along the way, and they can still fix their bad Excel math or whatever.
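
The contrast in about ten lines of C (the integer half is formally undefined behavior; on x86/Linux it takes the SIGFPE path described above):

code:
#include <stdio.h>

int main(void) {
    volatile double fzero = 0.0;
    volatile int izero = 0;

    /* Float divide by zero: no trap by default, just a quiet NaN
       that propagates through later math. */
    printf("%f\n", 0.0 / fzero);   /* prints: nan (or -nan) */

    /* Integer divide by zero: the CPU raises #DE, the kernel sends
       SIGFPE, and the process dies before this printf completes. */
    printf("%d\n", 0 / izero);
    return 0;
}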

OddObserver
Apr 3, 2009

ExcessBLarg! posted:


In contrast, doing "0.0/0.0" generates a NaN which is handled sensibly (if uselessly) by subsequent floating point calculations until your result printf shows "NaN" to the user. They're still wondering WTF, but you've saved a few trees along the way, and they can still fix their bad Excel math or whatever.

Or potentially killed more than a few trees along the way since you kept going and going and computing and computing and ended up with a NaN in the end.

ExcessBLarg!
Sep 1, 2001
Well then you should check for NaNs more often.

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

Ok, nanosecond and fortnight.

1 ml of water and the volume of the ocean.

The cultural value of the forums and the collected works of H.P. Lovecraft.
:psyduck: Physical computations, 3D graphics, statistical modelling? Storing the difference between two very large but close-together numbers? Capturing exponential growth?

What would your ideal fixed point scaling factor be? How much range and precision is enough for literally every computation you'd ever need to do? Keep in mind that the scaling factor isn't stored with the magnitude and has to be known independently (otherwise it's just decimal floating point). How would you write generic algorithms that could operate on fixed-point numbers of any magnitude?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Volte posted:

:psyduck: Physical computations, 3D graphics, statistical modelling? Storing the difference between two very large but close-together numbers? Capturing exponential growth?

What would your ideal fixed point scaling factor be? How much range and precision is enough for literally every computation you'd ever need to do? Keep in mind that the scaling factor isn't stored with the magnitude and has to be known independently (otherwise it's just decimal floating point). How would you write generic algorithms that could operate on fixed-point numbers of any magnitude?

I think my point is there isn't one answer to those questions. But that doesn't mean I like floating point as a solution.

As mentioned before, I would store the position of the binary point in a register.

I've done 3d cg using this methodology. It works fine.

When working with fixed point numbers of differing orders, you could have additional methods that take both point locations and convert. If you're cheeky and don't terribly care about performance, you could do this losslessly (for the computation anyway; results are limited to their representation obviously). Where I've needed to do this, I've been willing to take some amount of error and just shift.
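
For concreteness, the compile-time-binary-point version of this is tiny. A Q16.16 sketch in C, with the cross-format shift at the end (the names are mine, not any particular library's):

code:
#include <stdint.h>
#include <stdio.h>

/* Minimal Q16.16: 16 integer bits, 16 fractional bits. */
typedef int32_t q16_16;
#define FRAC_BITS 16
#define Q_ONE (1 << FRAC_BITS)

static q16_16 q_from_double(double d) { return (q16_16)(d * Q_ONE); }
static double q_to_double(q16_16 q)   { return (double)q / Q_ONE; }

/* Widen to 64 bits so the intermediate product can't overflow. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> FRAC_BITS);
}

/* Converting between binary point locations is "just shift",
   losing low bits whenever you shift right. */
static int32_t q_rescale(int32_t v, int from_frac, int to_frac) {
    int d = to_frac - from_frac;
    return d >= 0 ? v << d : v >> -d;
}

int main(void) {
    q16_16 a = q_from_double(3.25), b = q_from_double(-2.5);
    printf("%f\n", q_to_double(a + b));       /* plain int add: 0.750000 */
    printf("%f\n", q_to_double(q_mul(a, b))); /* -8.125000 */

    int32_t a_hi = q_rescale(a, 16, 24);          /* Q16.16 -> Q8.24 */
    printf("%f\n", (double)a_hi / (1 << 24));     /* still 3.250000 */
    return 0;
}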

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
Also, like... Floats are explicitly bad for storing the difference between two large but close numbers. You lose precision as values increase.

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

I think my point is there isn't one answer to those questions. But that doesn't mean I like floating point as a solution.

As mentioned before, I would store the position of the binary point in a register.

I've done 3d cg using this methodology. It works fine.

When working with fixed point numbers of differing orders, you could have additional methods that take both point locations and convert. If you're cheeky and don't terribly care about performance, you could do this losslessly. Where I've needed to do this, I've been willing to take some amount of error and just shift.
What do you mean store it in a register? You mean that an fma operation needs six registers now? And what about when it's on disk? If you're storing the scaling factor alongside the magnitude, that's not fixed point, it's floating point.

As far as 3D working fine with fixed point, accumulated error is a huge problem, especially when you get into simulation. Look at the geometry in a PlayStation game sometime.

leper khan posted:

Also, like... Floats are explicitly bad for storing the difference between two large but close numbers. You lose precision as values increase.
No, they are good for storing such a difference. You're probably thinking of the fact that subtracting two very large numbers in an algorithm should be avoided for the reason you stated. There's no method in the world where you can go from a very high magnitude to a very low magnitude relative to a fixed amount of storage and not lose precision, but at least with floats you have an error bound and can determine if the loss of precision is catastrophic or not. With fixed point you're probably just going to get zero, or else not even be able to represent the input operands in the first place.
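
The distinction, concretely (IEEE 754 doubles assumed):

code:
#include <stdio.h>

int main(void) {
    /* Computing a difference of huge, close numbers is the trap: the
       0.1 is already rounded away before the subtraction happens. */
    volatile double a = 1e16 + 0.1, b = 1e16;
    printf("%.17g\n", a - b);   /* prints 0, not 0.1: an algorithm problem */

    /* Storing an already-small difference is what floats are good at:
       full relative precision at any magnitude. A Q32.32 fixed point
       number would round 1e-12 to zero. */
    double diff = 1e-12;
    printf("%.17g\n", diff);    /* ~1e-12, good to ~16 digits */
    return 0;
}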

Doc Hawkins
Jun 15, 2010

Dashing? But I'm not even moving!


thankfully there is an efficient decimal number type we can just all switch to

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Volte posted:

What do you mean store it in a register? You mean that an fma operation needs six registers now? And what about when it's on disk? If you're storing the scaling factor alongside the magnitude, that's not fixed point, it's floating point.

As far as 3D working fine with fixed point, accumulated error is a huge problem, especially when you get into simulation. Look at the geometry in a PlayStation game sometime.

No, they are good for storing such a difference. You're probably thinking of the fact that subtracting two very large numbers in an algorithm should be avoided for the reason you stated. There's no method in the world where you can go from a very high magnitude to a very low magnitude relative to a fixed amount of storage and not lose precision, but at least with floats you have an error bound and can determine if the loss of precision is catastrophic or not. With fixed point you're probably just going to get zero, or else not even be able to represent the input operands in the first place.

No, it's fixed point. It just so happens that a generic fixed point library needs to account for any location of the binary point. If you don't want to burn a register, you could always write the methods using templates and just generate tons of code. There is always a fixed amount of precision within a representation. It's not floating point lmbo.

PlayStation games look good.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
If fixed point was the standard it would come with its own set of headaches.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

HappyHippo posted:

If fixed point was the standard it would come with its own set of headaches.

I can still want those headaches more than the ones I have now :smith:

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
I rarely hit the floating point headaches, despite how much bandwidth they take up in this thread.

But regardless, it would be nice if languages provided fixed point types and operations by default, and if they had good support in hardware (in addition to the floating point types already available).

Absurd Alhazred
Mar 27, 2010

by Athanatos
I could swear there was a library for some game engine that did 64-bit fixed point. Unity, maybe?

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?
I see the coding horror is coming from inside the thread again

Absurd Alhazred
Mar 27, 2010

by Athanatos
What's funny is that, while on paper fixed point operations should be cheaper, Intel processors at least have so much dedicated floating point circuitry that it ends up being slower to replace floating point operations with fixed point even when it's straightforward.

Volte
Oct 4, 2004

woosh woosh
Fixed point operations are basically just integer operations, you don't really need much hardware support for them. It can be nice to have some language support for them though.

leper khan posted:

No, it's fixed point. It just so happens that a generic fixed point library needs to account for any location of the binary point. If you don't want to burn a register, you could always write the methods using templates and just generate tons of code. There is always a fixed amount of precision within a representation. It's not floating point lmbo.
If you're using an additional value (in a register or otherwise) to dynamically control the scaling factor, then that's part of the representation and it can't really be said to have a fixed amount of precision. And why would generating a ton of extra code to do the same thing be a good thing? And it still doesn't address the issue of being able to dynamically scale numbers, particularly for intermediate values in larger computations. In the vast majority of cases you're not going to be running up against the precision limits of floating point at either end, so sliding that scale up and down isn't going to be particularly noticeable in terms of precision lost, and if you so desire, you can figure out the exact error bounds. The killer feature of floating point is not having to decide between range and precision in advance. It's a trade-off, but an important one.

Most of the time floating point headaches come from not understanding what a floating point number is and expecting it to behave like a real number, but fixed point isn't going to help much in that regard beyond not even letting you input the problematic values in the first place. I don't think I've ever actually run into a headache while working with floating-point numbers that was the fault of the representation, and I once had to write an entire math.h replacement library for a bespoke IBM vector architecture with very strict error bounds for all inputs. In fact the representation made a lot of those functions very nice and convenient to compute, because being able to compute f(2^x · m) where x and m are both relatively small can often be conveniently decomposed and computed with polynomials and lookup tables.
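
For flavor, here's the shape of that trick for f = log2, sketched with a deliberately crude polynomial (a real implementation would use a minimax fit or a table):

code:
#include <math.h>
#include <stdio.h>

static double log2_sketch(double v) {
    int e;
    double m = frexp(v, &e);   /* v = m * 2^e, with m in [0.5, 1) */

    /* Crude series for ln(m) around m = 1, then rescale to log2. */
    double x = m - 1.0;
    double ln_m = x - x * x / 2 + x * x * x / 3;
    return e + ln_m * 1.4426950408889634;  /* log2(v) = e + log2(m) */
}

int main(void) {
    printf("%f vs %f\n", log2_sketch(1536.0), log2(1536.0));
    /* ~10.5867 vs 10.5850: crude, but the decomposition is the point */
    return 0;
}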

leper khan posted:

PlayStation games look good.
It's okay if you like your polygons wobbly, but bridges and rockets, not so much.

Spatial
Nov 15, 2007

PSX geometry quality varied a lot. The GPU had no geometry subpixel precision whatsoever and the entire vertex transform was software, so you got a lot of implementations that didn't use fixed point and just used bare integers. This gives you seams between triangles, makes different vertexes on the same triangle snap to the pixel grid on different frames, etc. Pure wobble vision. And on top of all that you've got no perspective-correct texture sampling so the textures wobble like crazy too. They really pinched the pennies on the PSX GPU lol

CPColin
Sep 9, 2003

Big ol' smile.
I remember in high school a classmate of mine was highlighting parts of a book on 3D graphics programming and when I asked what they were doing, they said, "I'm going to rewrite all these parts that use floats to use integers because floats are slow." Wonder what they're up to these days.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

CPColin posted:

I remember in high school a classmate of mine was highlighting parts of a book on 3D graphics programming and when I asked what they were doing, they said, "I'm going to rewrite all these parts that use floats to use integers because floats are slow." Wonder what they're up to these days.

There was a time when floats incurred a significant cost to use at all. But as mentioned upthread, that time is gone.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Volte posted:

Fixed point operations are basically just integer operations, you don't really need much hardware support for them. It can be nice to have some language support for them though.

If you're using an additional value (in a register or otherwise) to dynamically control the scaling factor, then that's part of the representation and it can't really be said to have a fixed amount of precision. And why would generating a ton of extra code to do the same thing be a good thing? And it still doesn't address the issue of being able to dynamically scale numbers, particularly for intermediate values in larger computations. In the vast majority of cases you're not going to be running up against the precision limits of floating point at either end, so sliding that scale up and down isn't going to be particularly noticeable in terms of precision lost, and if you so desire, you can figure out the exact error bounds. The killer feature of floating point is not having to decide between range and precision in advance. It's a trade-off, but an important one.

Most of the time floating point headaches come from not understanding what a floating point number is and expecting it to behave like a real number, but fixed point isn't going to help much in that regard beyond not even letting you input the problematic values in the first place. I don't think I've ever actually run into a headache while working with floating-point numbers that was the fault of the representation, and I once had to write an entire math.h replacement library for a bespoke IBM vector architecture with very strict error bounds for all inputs. In fact the representation made a lot of those functions very nice and convenient to compute, because being able to compute f(2^x · m) where x and m are both relatively small can often be conveniently decomposed and computed with polynomials and lookup tables.

It's okay if you like your polygons wobbly, but bridges and rockets, not so much.

Lmao. Ok. Pretend I wrote that I'll write a separate library for every binary point possible that I can use for each appropriate context.

CPColin
Sep 9, 2003

Big ol' smile.

leper khan posted:

There was a time when floats incurred a significant cost to use at all. But as mentioned upthread, that time is gone.

And clearly the person to solve the problem back in 1995 was a high school freshman armed with a highlighter.

Athas
Aug 6, 2007

fuck that joker
Impressive amount of uninformed number opinions in this thread right now.

IEEE 754 isn't perfect, but programming is full of people who rage against floating point because they got screwed over by a minor roundoff error at some point, without realising how often it Just Works for code written by programmers with no idea about numerical issues.

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

CPColin posted:

And clearly the person to solve the problem back in 1995 was a high school freshman armed with a highlighter.

I’m sure Mr No-one-ever-needed-to-sum-a-bunch-of-small-numbers-that-totalled-a-much-bigger-number helped out as well

UraniumAnchor
May 21, 2006

Not a walrus.
Time is a flat circle.

UraniumAnchor posted:

Not specifically a coding horror but heard from a CS student in the hallway:

quote:

Why do we need hardware floating point? Computers are fast enough to do arbitrary precision anyway!

:eng99:

BigPaddy
Jun 30, 2008

That night we performed the rite and opened the gate.
Halfway through, I went to fix us both a coke float.
By the time I got back, he'd gone insane.
Plus, he'd left the gate open and there was evil everywhere.


The definition of just throwing hardware at a software problem.

Zopotantor
Feb 24, 2013

...und ist er drin dann lassen wir ihn niemals wieder raus...

UraniumAnchor posted:

Time is a flat circle.

It’s just the wheel of reincarnation (PDF). Here's a video on the new Unreal gaming engine where they explain their rendering algorithms, and they are actually falling back to software rasterization in some cases.

(Should start around 1:07.)
https://www.youtube.com/watch?v=TMorJX3Nj6U&t=4050s

Absurd Alhazred
Mar 27, 2010

by Athanatos

Zopotantor posted:

It’s just the wheel of reincarnation (PDF). Here's a video on the new Unreal gaming engine where they explain their rendering algorithms, and they are actually falling back to software rasterization in some cases.

(Should start around 1:07.)
https://www.youtube.com/watch?v=TMorJX3Nj6U&t=4050s

Small triangles are just larger pixels, that's all, after all.

Dylan16807
May 12, 2010

leper khan posted:

Lmao. Ok. Pretend I wrote that I'll write a separate library for every binary point possible that I can use for each appropriate context.

So let's say you write a program using those libraries, doing the analysis to make sure your fixed point numbers won't overflow. Then someone comes along, deletes your library, and replaces everything with floats that have the same number of mantissa bits.

The output should now be just as good, if not better.

The part that made your outputs high quality wasn't using fixed point, it was caring enough about your code to make sure it would have enough bits for the precision you need. Floating point costs an extra ~10 bits to store the exponent, and it needs more sophisticated hardware, but if you can easily afford those then it's not going to hurt your accuracy.
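
That claim is easy to spot-check, too. A sketch comparing a 24-bit signed fixed point format (16 fractional bits) against float32, which carries 24 significand bits:

code:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Every value the 24-bit fixed format can hold needs at most 23
       significant bits, so all of them should survive the round trip
       through a float unchanged. */
    int bad = 0;
    for (int64_t raw = -(1 << 23); raw < (1 << 23); raw++) {
        double fixed = (double)raw / (1 << 16);  /* exact in a double */
        float f = (float)fixed;
        if ((double)f != fixed) bad++;
    }
    printf("values that failed to round-trip: %d\n", bad);  /* 0 */
    return 0;
}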

Polio Vax Scene
Apr 5, 2009



we all float down here (whether you like it or not)
