|
Dylan16807 posted:So let's say you write a program using those libraries, doing the analysis to make sure your fixed point numbers won't overflow. Then someone comes along, deletes your library, and replaces everything with floats that have the same number of mantissa bits. If I need the full domain of my fixed point set, accuracy would be impacted. This follows from the pigeonhole principle, since floats have multiple representations for the same value. The loss of generality has to go somewhere, especially since floats _also_ increase the range. I don't understand how you could think you won't lose accuracy. Granted, I'm very unlikely to actually care in any real scenario (e.g. it's unlikely I would need the full domain of input values available).
|
# ? Feb 23, 2022 00:25 |
|
|
# ? May 27, 2024 04:16 |
|
Here's an idea: instead of using all 64 bits as a fixed-point representation, we could use 53 bits as a fixed-point representation, and then use a few of the extra bits to tell you what the scale is and where the decimal point goes.
|
# ? Feb 23, 2022 00:30 |
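For what it's worth, Jabor's "idea" is (deliberately) just a description of IEEE 754 double precision: 52 stored fraction bits (53 significant bits counting the implicit leading 1), an 11-bit exponent saying where the binary point goes, and a sign bit. A Go sketch pulling the three fields apart (the helper and its field names are my own, not from the thread):

```go
package main

import (
	"fmt"
	"math"
)

// fields splits a float64 into its three IEEE 754 binary64 fields:
// 1 sign bit, 11 exponent bits (biased by 1023), 52 fraction bits.
func fields(x float64) (sign uint64, exponent uint64, fraction uint64) {
	bits := math.Float64bits(x)
	sign = bits >> 63
	exponent = (bits >> 52) & 0x7FF
	fraction = bits & 0xFFFFFFFFFFFFF
	return
}

func main() {
	// 6.5 = 1.625 * 2^2, so the biased exponent is 1023 + 2 = 1025.
	s, e, f := fields(6.5)
	fmt.Println(s, e, f)
}
```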
|
leper khan posted:If I need the full domain of my fixed point set, accuracy would be impacted. This follows from the pigeonhole principle, since floats have multiple representations for the same value. The loss of generality has to go somewhere, especially since floats _also_ increase the range. The part where I said "same number of mantissa bits" is how I think you won't lose accuracy.
|
# ? Feb 23, 2022 00:32 |
|
Jabor posted:Here's an idea: instead of using all 64 bits as a fixed-point representation, we could use 53 bits as a fixed-point representation, and then use a few of the extra bits to tell you what the scale is and where the decimal point goes. :hmm.yes00000000000001:
|
# ? Feb 23, 2022 00:38 |
|
leper khan posted:floats have multiple representations for the same value.
|
# ? Feb 23, 2022 00:39 |
|
Volte posted:What? There's a canonical representation, and other representations. Play around with the mantissa and exponents, you should be able to get a reasonable number of representations for 2. There's also a pretty large number of representation of NaNs, assuming you don't care about the payload/etc.
|
# ? Feb 23, 2022 00:46 |
|
leper khan posted:There's a canonical representation, and other representations. Play around with the mantissa and exponents, you should be able to get a reasonable number of representations for 2. There's also a pretty large number of representation of NaNs, assuming you don't care about the payload/etc. 2 has exactly one representation: sign bit unset, exponent of 128, mantissa of 0.
|
# ? Feb 23, 2022 00:52 |
|
Jabor posted:2 has exactly one representation: sign bit unset, exponent of 128, mantissa of 0. gently caress you're going to make me look up some non canonical poo poo, aren't you
|
# ? Feb 23, 2022 00:55 |
|
1? Should be able to have any exponent value. E: really don't feel like breaking out a notebook or googling it
|
# ? Feb 23, 2022 01:00 |
|
leper khan posted:1? Should be able to have any exponent value. The most significant bit of the mantissa isn't actually stored, it's just always 1. A consequence of this is that the ranges of values represented by each exponent don't overlap at all. The only representation of 1 is the representation of 2 with the exponent reduced by 1.
|
# ? Feb 23, 2022 01:04 |
|
leper khan posted:1? Should be able to have any exponent value.
|
# ? Feb 23, 2022 01:05 |
|
to check if there's overlap in normal and subnormal values
|
# ? Feb 23, 2022 01:15 |
|
leper khan posted:to check if there's overlap in normal and subnormal values
|
# ? Feb 23, 2022 01:24 |
|
Volte posted:There isn't. Cool. Just the bunches of NaN and a few infinities and zeroes.
|
# ? Feb 23, 2022 01:27 |
|
Polio Vax Scene posted:we all float down here (whether you like it or not) I don't. The microprocessor I'm working on doesn't have an FPU.
|
# ? Feb 23, 2022 01:43 |
|
One thing related to the discussion that I don't know, but hope someone else might: suppose you have a numerically-stable FP algorithm that you want to use on an input with n meaningful bits. How much underlying precision do you need in the FP type? Is it related to how well-conditioned the problem is? (Or, alternatively, if you have IEEE doubles/singles/whatever, what's the highest input precision you can handle?)
|
# ? Feb 23, 2022 02:17 |
|
It's almost as if domain experts had this same conversation nearly 40 years ago based on their experiences with earlier non-standardized floating point formats.
|
# ? Feb 23, 2022 02:37 |
|
leper khan posted:Cool. Just the bunches of NaN and a few infinities and zeroes. There's only two infinities, positive and negative. Exponent all 1s and significand all 0s is Infinity, exponent all 1s and significand anything else is NaN. The only two different things that are equal are zero and negative zero, and even then they're not two bit patterns representing the same number. You get different results if you divide by them.
|
# ? Feb 23, 2022 02:41 |
|
OddObserver posted:One thing related to the discussion I don't know which I hope someone else might: suppose you have a numerically-stable FP algorithm that you want to use on an input with n meaningful bits. How much underlying precision do you need in the FP type? Is it related to how well-conditioned the problem is? (Or, alternatively, if you have IEEE doubles/singles/whatever, what's the highest input precision you can handle?) Depends on the operations. Adds/subs are notably bad. https://floating-point-gui.de/errors/propagation/ https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
|
# ? Feb 23, 2022 02:43 |
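The linked pages boil down to behavior like this Go sketch (illustrative, not from either link): addition and subtraction are where input rounding error becomes visible, and a small enough addend can be absorbed entirely.

```go
package main

import "fmt"

func main() {
	// Rounding error introduced when the inputs were converted to
	// binary becomes visible after addition:
	a, b := 0.1, 0.2       // neither is exactly representable in binary
	fmt.Println(a+b == 0.3) // false
	fmt.Println(a + b)      // 0.30000000000000004

	// Absorption: an addend below half an ulp of the larger operand
	// disappears entirely.
	one, eps := 1.0, 1e-16      // half an ulp of 1.0 is 2^-53 ≈ 1.11e-16
	fmt.Println(one+eps == one) // true
}
```

(The variables matter here: Go evaluates untyped constant expressions in arbitrary precision at compile time, so the same comparisons written with literals alone would hide the effect.)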
|
Dylan16807 posted:There's only two infinities, positive and negative. Exponent all 1s and significand all 0s is Infinity, exponent all 1s and significand anything else is NaN. 0 and -0 are two representations for the number 0 hth
|
# ? Feb 23, 2022 02:45 |
|
languages shouldn't give you floating point operations by default. you should at least have to import them
|
# ? Feb 23, 2022 06:39 |
|
Matlab in the early-to-mid-2010s had an Int64 type but defined no operators for it, I find this approach to numerical computation very zen
|
# ? Feb 23, 2022 06:57 |
|
If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous. If you're talking about IEEE floats having behavior that needs to be learned before they can be used effectively, you're right. I'm sure the solution to that is to write a "variable fixed point" implementation without having first understood the behavior of IEEE and the reasoning behind the design of the standard.
|
# ? Feb 23, 2022 07:35 |
|
more falafel please posted:If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous. Mostly I was talking about floats to procrastinate writing some process docs. Worked really well.
|
# ? Feb 23, 2022 07:49 |
|
Floats are untenable. https://www.youtube.com/watch?v=7iHsjSHKT70
|
# ? Feb 23, 2022 08:59 |
|
more falafel please posted:If you're talking about IEEE float operations being slow without talking about vectorization you're being ridiculous. agreed, floats are too dangerous to use without proper knowledge. this is why they shouldn't be available out-of-the-box in plangs by default
|
# ? Feb 23, 2022 09:33 |
|
QuarkJets posted:Matlab in the early-to-mid-2010s had an Int64 type but defined no operators for it, I find this approach to numerical computation very zen IIRC there were operators defined for it, but only between an int and another int. Even today if you try to multiply a float and an int Matlab will stop you and ask wtf you think you are doing
|
# ? Feb 23, 2022 09:38 |
|
DoctorTristan posted:Even today if you try to multiply a float and an int Matlab will stop you and ask wtf you think you are doing This is the way. Implicit type casts are dangerous. Also, I suggest that people who want to know more about FP, already know a lot about FP, or long ago figured out that FP is bullshit and we should just use fixed point and who needs nonlinear functions anyway, read some of William Kahan's papers, presentations, and essays. A lot of it is dense material on numerical methods (but still readable relative to the genre), but there's also a lot about teaching floating point tricks and doing numbers properly. William Kahan seems to have spent the past thirty years being angry at people doing floating point wrong or spreading misinformation about it (I guess this is why Gustafson talked about the "Wrath of Kahan").
|
# ? Feb 23, 2022 10:01 |
|
Athas posted:This is the way. Implicit type casts are dangerous. I agree, but sometimes you run into a lovely language without generics that defines Math.Max on float64 only, and I end up writing Go code:
(I genuinely don't mind implicit type casts that keep full information, like int -> long, ieee float -> double, even though they can sometimes murder performance.)
|
# ? Feb 23, 2022 10:17 |
|
Xarn posted:(I genuinely don't mind implicit type casts that keep full information, like int -> long, ieee float -> double, even though they can sometimes murder performance.) I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.) Any amount of life spent chasing down this stuff is too much.
|
# ? Feb 23, 2022 10:20 |
|
leper khan posted:0 and -0 and two representations for the number 0 hth I think what +0 really means is "a number between 0 and 2 ^ -126", and -0 really means "a number between - (2 ^ -126) and 0". Or maybe -127 depending how you do rounding.
|
# ? Feb 23, 2022 11:06 |
|
CPColin posted:And clearly the person to solve the problem back in 1995 was a high school freshman armed with a highlighter. I took a computer graphics class in college in the late 90s and most of our textbook was devoted to deriving integer versions of various rendering algorithms. A lost art.
|
# ? Feb 23, 2022 13:46 |
|
Athas posted:This is the way. Implicit type casts are dangerous. I'm not sure if this will be healthy for me to read, but I'll probably do it anyway. Athas posted:I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.)
|
# ? Feb 23, 2022 13:57 |
|
I read the 81 pages of the java fp hurts everyone paper and it didn't make me want to not stan fixed point.
|
# ? Feb 23, 2022 14:37 |
|
smackfu posted:I took a computer graphics class in college in the late 90s and most of our textbook was devoted to deriving integer versions of various rendering algorithms. A lost art. What was the book?
|
# ? Feb 23, 2022 14:38 |
|
Not sure if it was the same book (cover looks different) but this is the basic idea: Practical Algorithms for 3D Computer Graphics By: R. Stuart Ferguson
|
# ? Feb 23, 2022 15:20 |
|
Relieved to see it wasn't written by said classmate
|
# ? Feb 23, 2022 16:51 |
|
I mean, if you want to do fancy 3D stuff targeting NES, Game Boy, etc (there is a growing (!?!?!) market for new games in reproduction carts), that can be useful to know.
|
# ? Feb 23, 2022 16:53 |
|
Absurd Alhazred posted:I mean, if you want to do fancy 3D stuff targeting NES, Game Boy, etc (there is a growing (!?!?!) market for new games in reproduction carts), that can be useful to know. The market is small, but it's possible to sell roughly the same volume as you're likely to get on steam/etc (and you should port there as well).
|
# ? Feb 23, 2022 17:04 |
|
Athas posted:I have spent too much of my life chasing down performance bugs because some C code dared to call sqrt(x) where x was a float, because sqrt() is defined for doubles, and so this involves a silent expansion to double precision and a full double precision square root. (The solution is to use sqrtf() instead.) my first reaction was "well then sqrt should check for float input and automatically call sqrtf, and only do the expansion if explicitly requested in the call", but now I'm thinking it's probably better to do the opposite and live with hunting down the performance impact, and not cause something to explode at NASA. Then again, slow performance could also cause something to explode at NASA.
|
|
# ? Feb 23, 2022 17:16 |