leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Dylan16807 posted:

So let's say you write a program using those libraries, doing the analysis to make sure your fixed point numbers won't overflow. Then someone comes along, deletes your library, and replaces everything with floats that have the same number of mantissa bits.

The output should now be just as good, if not better.

The part that made your outputs high quality wasn't using fixed point, it was caring enough about your code to make sure it would have enough bits for the precision you need. Floating point costs an extra ~10 bits to store the exponent, and it needs more sophisticated hardware, but if you can easily afford those then it's not going to hurt your accuracy.

If I need the full domain of my fixed point set, accuracy would be impacted. This follows from the pigeonhole principle, since floats have multiple representations for the same value. The loss of generality has to go somewhere, especially since floats _also_ increase the range.

I don't understand how you could think you won't lose accuracy. Granted, I'm very unlikely to actually care in any real scenario (e.g. it's unlikely I would need the full domain of input values available).

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Here's an idea: instead of using all 64 bits as a fixed-point representation, we could use 53 bits as a fixed-point representation, and then use a few of the extra bits to tell you what the scale is and where the decimal point goes.
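
Which is a 64-bit double: 52 stored significand bits (53 counting the implicit leading bit), 11 exponent bits, and a sign. A quick Python sketch of what that 53-bit significand buys you: every integer up to 2**53 is represented exactly.

```python
# A double's 53-bit significand represents every integer up to 2**53 exactly;
# past that, the spacing between adjacent doubles grows to 2.
exact_limit = 2.0 ** 53              # 9007199254740992.0

assert exact_limit - 1 == 2 ** 53 - 1   # one below the limit: still exact
assert exact_limit + 1 == exact_limit   # 2**53 + 1 rounds back down to 2**53
assert exact_limit + 2 == 2 ** 53 + 2   # the next representable integer
```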

Dylan16807
May 12, 2010

leper khan posted:

If I need the full domain of my fixed point set, accuracy would be impacted. This follows from the pigeonhole principle, since floats have multiple representations for the same value. The loss of generality has to go somewhere, especially since floats _also_ increase the range.

I don't understand how you could think you won't lose accuracy. Granted, I'm very unlikely to actually care in any real scenario (e.g. it's unlikely I would need the full domain of input values available).

The part where I said "same number of mantissa bits" is how I think you won't lose accuracy.

CPColin
Sep 9, 2003

Big ol' smile.

Jabor posted:

Here's an idea: instead of using all 64 bits as a fixed-point representation, we could use 53 bits as a fixed-point representation, and then use a few of the extra bits to tell you what the scale is and where the decimal point goes.

:hmm.yes00000000000001:

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

floats have multiple representations for the same value.
What?

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

There's a canonical representation, and other representations. Play around with the mantissa and exponents; you should be able to get a reasonable number of representations for 2. There's also a pretty large number of representations of NaN, assuming you don't care about the payload/etc.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

leper khan posted:

There's a canonical representation, and other representations. Play around with the mantissa and exponents; you should be able to get a reasonable number of representations for 2. There's also a pretty large number of representations of NaN, assuming you don't care about the payload/etc.

2 has exactly one representation: sign bit unset, exponent of 128, mantissa of 0.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Jabor posted:

2 has exactly one representation: sign bit unset, exponent of 128, mantissa of 0.

gently caress you're going to make me look up some non canonical poo poo, aren't you :smith:

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
1? Should be able to have any exponent value.

E: really don't feel like breaking out a notebook or googling it

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

leper khan posted:

1? Should be able to have any exponent value.

E: really don't feel like breaking out a notebook or googling it

The most significant bit of the mantissa isn't actually stored, it's just always 1. A consequence of this is that the ranges of values represented by different exponents don't overlap at all.

The only representation of 1 is the representation of 2 with the exponent reduced by 1.
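
The hidden bit is easy to see by pulling a float apart. A quick Python sketch using the single-precision layout:

```python
import struct

def f32_fields(x):
    """Decompose a float into IEEE 754 single-precision fields."""
    bits, = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # biased by 127
    mantissa = bits & 0x7FFFFF          # stored bits only; the leading 1 is implicit
    return sign, exponent, mantissa

# 2.0 = 1.0 * 2**1: sign unset, biased exponent 128, stored mantissa 0
assert f32_fields(2.0) == (0, 128, 0)
# 1.0 is the same pattern with the exponent reduced by 1
assert f32_fields(1.0) == (0, 127, 0)
```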

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

1? Should be able to have any exponent value.

E: really don't feel like breaking out a notebook or googling it
There's literally no number in IEEE754 with multiple representations, unless you count positive and negative zero or NaN (which is, by definition, not a number). It's a pretty important property of IEEE754.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
:effort: to check if there's overlap in normal and subnormal values

Volte
Oct 4, 2004

woosh woosh

leper khan posted:

:effort: to check if there's overlap in normal and subnormal values
There isn't.
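
For doubles the boundary checks out directly; a Python sketch (math.nextafter needs 3.9+):

```python
import math
import sys

smallest_normal = sys.float_info.min               # 2**-1022
largest_subnormal = math.nextafter(smallest_normal, 0.0)

# The subnormal range ends strictly below the normal range: no overlap,
# and no gap either; the two differ by exactly one ulp (2**-1074).
assert largest_subnormal < smallest_normal
assert smallest_normal - largest_subnormal == 2.0 ** -1074
```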

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Volte posted:

There isn't.

Cool. Just the bunches of NaN and a few infinities and zeroes.
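
"Bunches" is right: in single precision, any pattern with the exponent all 1s and a nonzero significand is a NaN, which works out to 2**24 - 2 bit patterns. A Python sketch of two of them, differing only in payload:

```python
import math
import struct

# Exponent all 1s + nonzero significand = NaN; the significand bits are the payload.
nan_a = struct.unpack(">f", struct.pack(">I", 0x7FC00000))[0]  # quiet NaN, payload 0
nan_b = struct.unpack(">f", struct.pack(">I", 0x7FC00001))[0]  # quiet NaN, payload 1

assert math.isnan(nan_a) and math.isnan(nan_b)
assert nan_a != nan_b   # NaN compares unequal to everything, itself included
```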

Foxfire_
Nov 8, 2010

Polio Vax Scene posted:

we all float down here (whether you like it or not)

I don't. The microprocessor I'm working on doesn't have an FPU.

OddObserver
Apr 3, 2009
One thing related to the discussion that I don't know, but hope someone else might: suppose you have a numerically-stable FP algorithm that you want to use on an input with n meaningful bits. How much underlying precision do you need in the FP type? Is it related to how well-conditioned the problem is? (Or, alternatively, if you have IEEE doubles/singles/whatever, what's the highest input precision you can handle?)

ExcessBLarg!
Sep 1, 2001
It's almost as if domain experts had this same conversation nearly 40 years ago based on their experiences with earlier non-standardized floating point formats.

Dylan16807
May 12, 2010

leper khan posted:

Cool. Just the bunches of NaN and a few infinities and zeroes.

There's only two infinities, positive and negative. Exponent all 1s and significand all 0s is Infinity, exponent all 1s and significand anything else is NaN.

The only two different things that are equal are zero and negative zero, and even then they're not two bit patterns representing the same number. You get different results if you divide by them.
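
The two zeros are easy to poke at from Python. They compare equal, but the bit patterns differ and the sign is observable through functions like copysign and atan2 (Python raises ZeroDivisionError on float division by zero, so the 1/±0 = ±inf behavior can't be shown directly):

```python
import math
import struct

pos, neg = 0.0, -0.0
assert pos == neg                                         # equal as numbers
assert struct.pack(">d", pos) != struct.pack(">d", neg)   # distinct bit patterns
assert math.copysign(1.0, neg) == -1.0                    # the sign bit survives
assert math.atan2(0.0, neg) == math.pi                    # and it changes results
assert math.atan2(0.0, pos) == 0.0
```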

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

OddObserver posted:

One thing related to the discussion that I don't know, but hope someone else might: suppose you have a numerically-stable FP algorithm that you want to use on an input with n meaningful bits. How much underlying precision do you need in the FP type? Is it related to how well-conditioned the problem is? (Or, alternatively, if you have IEEE doubles/singles/whatever, what's the highest input precision you can handle?)

Depends on the operations. Adds/subs are notably bad.

https://floating-point-gui.de/errors/propagation/
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
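
The add/sub hazard is absorption and catastrophic cancellation; a small Python sketch of both:

```python
# Absorption: at 1e16 adjacent doubles are 2 apart, so adding 1 changes nothing.
big = 1e16
assert (big + 1.0) - big == 0.0

# Cancellation: (1 + x) - 1 for small x loses the low bits of x that were
# rounded away when forming 1 + x, so the relative error jumps to ~1e-8.
x = 1e-8
naive = (1.0 + x) - 1.0
assert naive != x
assert abs(naive - x) / x < 1e-7   # huge compared to the ~1e-16 of one rounding
```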

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Dylan16807 posted:

There's only two infinities, positive and negative. Exponent all 1s and significand all 0s is Infinity, exponent all 1s and significand anything else is NaN.

The only two different things that are equal are zero and negative zero, and even then they're not two bit patterns representing the same number. You get different results if you divide by them.

0 and -0 are two representations for the number 0 hth

redleader
Aug 18, 2005

Engage according to operational parameters
languages shouldn't give you floating point operations by default. you should at least have to import them

QuarkJets
Sep 8, 2008

Matlab in the early-to-mid-2010s had an Int64 type but defined no operators for it; I find this approach to numerical computation very zen

more falafel please
Feb 26, 2005

forums poster

If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous.

If you're talking about IEEE floats having behavior that needs to be learned before they can be used effectively, you're right. I'm sure the solution to that is to write a "variable fixed point" implementation without having first understood the behavior of IEEE and the reasoning behind the design of the standard.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

more falafel please posted:

If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous.

If you're talking about IEEE floats having behavior that needs to be learned before they can be used effectively, you're right. I'm sure the solution to that is to write a "variable fixed point" implementation without having first understood the behavior of IEEE and the reasoning behind the design of the standard.

Mostly I was talking about floats to procrastinate writing some process docs. Worked really well.

Ola
Jul 19, 2004

Floats are untenable.

https://www.youtube.com/watch?v=7iHsjSHKT70

redleader
Aug 18, 2005

Engage according to operational parameters

more falafel please posted:

If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous.

If you're talking about IEEE floats having behavior that needs to be learned before they can be used effectively, you're right. I'm sure the solution to that is to write a "variable fixed point" implementation without having first understood the behavior of IEEE and the reasoning behind the design of the standard.

agreed, floats are too dangerous to use without proper knowledge. this is why they shouldn't be available out-of-the-box in plangs by default

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

QuarkJets posted:

Matlab in the early-to-mid-2010s had an Int64 type but defined no operators for it, I find this approach to numerical computation very zen

IIRC there were operators defined for it, but only between an int and another int. Even today if you try to multiply a float and an int Matlab will stop you and ask wtf you think you are doing

Athas
Aug 6, 2007

fuck that joker

DoctorTristan posted:

Even today if you try to multiply a float and an int Matlab will stop you and ask wtf you think you are doing

This is the way. Implicit type casts are dangerous.

Also, I suggest that people who want to know more about FP, already know a lot about FP, or long ago figured out that FP is bullshit and we should just use fixed point and who needs nonlinear functions anyway, read some of William Kahan's papers, presentations, and essays. A lot of it is dense material on numerical methods (but still readable relative to the genre), but there's also a lot about teaching floating point tricks and doing numbers properly. William Kahan seems to have spent the past thirty years being angry at people doing floating point wrong or spreading misinformation about it (I guess this is why Gustafson talked about the "Wrath of Kahan").

Xarn
Jun 26, 2015

Athas posted:

This is the way. Implicit type casts are dangerous.

I agree, but sometimes you run into a lovely language without generics that defines math.Max only on float64, so I end up writing

Go code:
return float32(math.Max(float64(fullDuration), float64(segmentEnd + 1)))
and maybe implicit type casts are good? :v:

(I genuinely don't mind implicit type casts that keep full information, like int -> long, ieee float -> double, even though they can sometimes murder performance.)

Athas
Aug 6, 2007

fuck that joker

Xarn posted:

(I genuinely don't mind implicit type casts that keep full information, like int -> long, ieee float -> double, even though they can sometimes murder performance.)

I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.)

Any amount of life spent chasing down this stuff is too much.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

leper khan posted:

0 and -0 and two representations for the number 0 hth

I think what +0 really means is "a number between 0 and 2^-126", and -0 really means "a number between -(2^-126) and 0". Or maybe -127, depending on how you do rounding.

smackfu
Jun 7, 2004

CPColin posted:

And clearly the person to solve the problem back in 1995 was a high school freshman armed with a highlighter.

I took a computer graphics class in college in the late 90s and most of our textbook was devoted to deriving integer versions of various rendering algorithms. A lost art.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Athas posted:

This is the way. Implicit type casts are dangerous.

Also, I suggest that people who want to know more about FP, already know a lot about FP, or long ago figured out that FP is bullshit and we should just use fixed point and who needs nonlinear functions anyway, read some of William Kahan's papers, presentations, and essays. A lot of it is dense material on numerical methods (but still readable relative to the genre), but there's also a lot about teaching floating point tricks and doing numbers properly. William Kahan seems to have spent the past thirty years being angry at people doing floating point wrong or spreading misinformation about it (I guess this is why Gustafson talked about the "Wrath of Kahan").

I'm not sure if this will be healthy for me to read, but I'll probably do it anyway.

Athas posted:

I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.)

Any amount of life spent chasing down this stuff is too much.

:yeah:

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.
I read the 81 pages of the java fp hurts everyone paper and it didn't make me want to not stan fixed point.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

smackfu posted:

I took a computer graphics class in college in the late 90s and most of our textbook was devoted to deriving integer versions of various rendering algorithms. A lost art.

What was the book? :allears:

smackfu
Jun 7, 2004

Not sure if it was the same book (cover looks different) but this is the basic idea:

Practical Algorithms for 3D Computer Graphics
By: R. Stuart Ferguson

CPColin
Sep 9, 2003

Big ol' smile.
Relieved to see it wasn't written by said classmate

Absurd Alhazred
Mar 27, 2010

by Athanatos
I mean, if you want to do fancy 3D stuff targeting NES, Game Boy, etc (there is a growing (!?!?!) market for new games in reproduction carts), that can be useful to know.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Absurd Alhazred posted:

I mean, if you want to do fancy 3D stuff targeting NES, Game Boy, etc (there is a growing (!?!?!) market for new games in reproduction carts), that can be useful to know.

The market is small, but it's possible to sell roughly the same volume as you're likely to get on steam/etc (and you should port there as well).

Polio Vax Scene
Apr 5, 2009



Athas posted:

I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.)

Any amount of life spent chasing down this stuff is too much.

my first reaction was "well then the sqrt should check for float input and automatically call sqrtf and only do the expansion if explicitly requested in the call" but now I'm thinking it's probably better to do the opposite and live with hunting down the performance impact and not cause something to explode at NASA

then again slow performance could also cause something to explode at NASA
