|
Baby Babbeh posted:That... makes sense. I was overcomplicating this. It returns another df rather than changing it in place, right? As Cingulate mentioned, that would be an in-place function call. But I thought I would chime in to show how to do it as a function returning a new df: Python code:
Anyways, DataFrame.where (or more commonly for me Series.where) is one of the most common operations when I'm using method chaining.
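Not the snippet from the post above (that didn't survive the quote), but a minimal sketch of the kind of chain where Series.where is handy; the DataFrame and column names here are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({"temp": [12.0, -3.5, 7.2, -0.1]})

# Series.where keeps values where the condition holds and substitutes
# the second argument elsewhere, returning a new object rather than
# mutating df in place -- which is what makes it chain nicely.
cleaned = df.assign(temp_clipped=lambda d: d["temp"].where(d["temp"] > 0, 0.0))

print(cleaned["temp_clipped"].tolist())  # [12.0, 0.0, 7.2, 0.0]
```

The original df is left untouched, so this composes with further .assign / .query steps in the same chain.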
|
# ? Jun 9, 2017 17:39 |
|
|
|
Interesting, I literally never use .where. Have to look into that. Also liking the py3redirect extension
|
# ? Jun 9, 2017 18:59 |
|
While we are on the subject of documentation, I also use http://devdocs.io from time to time. You can specify what documentation and version you want to search, and it even downloads the documentation for offline use.
|
# ? Jun 9, 2017 22:46 |
|
accipter posted:You can with py3redirect!
|
# ? Jun 9, 2017 23:59 |
|
I'm trying to do something with a counter where you can poll something for x time with a y interval. Seems pretty straightforward. I had a bug in it so the while condition never got met (the variable was a string instead of an integer). So I tested whether my fix would work and I ran into this. For this project it doesn't really matter, but I'm pretty curious where the 0.00000000000000004 comes from in the 3rd iteration, and why it only increased from 4.6 to 4.699999999999999. code:
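The code block above didn't survive, but a minimal loop in the spirit described (a hypothetical 0.1 interval) reproduces the effect:

```python
# Repeatedly adding 0.1 accumulates rounding error, because 0.1 has no
# exact binary representation: each addition rounds to the nearest double.
elapsed = 0.0
for _ in range(3):
    elapsed += 0.1

print(elapsed)         # 0.30000000000000004
print(elapsed == 0.3)  # False -- an exact while-condition on this never fires
```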
|
# ? Jun 10, 2017 10:43 |
|
Obviously there's something wrong with your megahurts, so you'll need a new processing unit. It's because of floating point numbers: http://floating-point-gui.de e: You'll want to change the logic of that loop so that it counts by integers instead of floats. When you do the division part your integers are turning into floats. e2: The overly simplified gist of it is that your computer does not know how to represent a value like 0.3 exactly, so it needs to approximate. That's where that extra 0.0...4 is coming from, and that's why some of your other values are actually less than what you are expecting. This is why the general wisdom is to never compare floating point numbers for exact equality if the result is critical, e.g.: code:
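The code block attached to that post is missing from the quote; the usual pattern it presumably showed is comparing within a tolerance rather than exactly:

```python
a = 0.1 + 0.2
b = 0.3

# Exact equality trips over representation error...
print(a == b)  # False

# ...so compare against a tolerance appropriate to your problem instead.
tolerance = 1e-9
print(abs(a - b) < tolerance)  # True
```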
Boris Galerkin fucked around with this message at 11:44 on Jun 10, 2017 |
# ? Jun 10, 2017 11:24 |
|
Boris Galerkin posted:Obviously there's something wrong with your megahurts, so you'll need a new processing unit. Never knew floating points behave like this, learning something new every day.
|
# ? Jun 10, 2017 11:36 |
|
LochNessMonster posted:Never knew floating points behave like this, learning something new every day. It's more of a hardware limitation than an actual thing. In real life 0.3 is exactly 0.3, but we lose some accuracy when we represent it in a computer. I imagine for most people none of this has any direct relevance, but if you're doing any sort of numerical/computational work then this type of stuff comes up all the time. Doing simulations, for example, we already accept that there is an inherent error from the actual math equations and the simplifications to them, we also accept that there are errors from discretization (dividing the problem up into multiple smaller parts), and we also must be aware of roundoff/precision problems due to the computer. Lots of fields use 64-bit floats, for example, because they give us more precision (more decimals). I remember in one of my earliest courses in this field, our professor intentionally misled us into using MATLAB to write our first finite difference solver for a problem that produced nonsense results, because the default floating point precision in MATLAB (at that time? not sure if it's still the case) was 32-bit. Due to error propagation these errors eventually ruined the solution. Telling MATLAB to use 64-bit reals (or using C/Fortran double precision numbers), on the other hand, fixed the problem because the errors never had a chance to propagate. Boris Galerkin fucked around with this message at 12:03 on Jun 10, 2017 |
# ? Jun 10, 2017 11:59 |
|
LochNessMonster posted:Never knew floating points behave like this, learning something new every day. Did u think that 64 bits was enough to represent all of the reals?
|
# ? Jun 10, 2017 12:26 |
|
Boris Galerkin posted:It's more of a hardware limitation than an actual thing. In real life 0.3 is exactly 0.3, but we lose some accuracy when we represent it in a computer. Look at this scrub who doesn't compensate for floating pt errors and is agnostic to precision. Mixed precision iterative refinement
|
# ? Jun 10, 2017 12:28 |
|
Next y'all will be like "use doubles" or maybe even "use quads" and then come back like "my prog is mega slow"
|
# ? Jun 10, 2017 12:31 |
|
Just use the Decimal class
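A quick sketch of that suggestion: decimal is in the standard library, and as long as you build Decimals from strings (passing a float literal would smuggle the binary error back in), tenths add up exactly:

```python
from decimal import Decimal

# Decimal("0.1") is exactly one tenth; Decimal(0.1) would inherit
# the float's binary approximation instead.
total = Decimal("0")
for _ in range(3):
    total += Decimal("0.1")

print(total)                    # 0.3
print(total == Decimal("0.3"))  # True
```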
|
# ? Jun 10, 2017 14:39 |
|
Malcolm XML posted:
I didn't think about this at all, nor do I know what anything in your sentence means. Ignorance is bliss in some cases.
|
# ? Jun 10, 2017 18:04 |
|
Malcolm XML posted:
That has nothing to do with floating point errors though; exact Decimal types exist without having to represent all of the reals
|
# ? Jun 11, 2017 01:38 |
|
Lol if u have to think about the 64th decimal place in your code just lol
|
# ? Jun 11, 2017 01:45 |
|
QuarkJets posted:That has nothing to do with floating point errors though; exact Decimal types exist without having to represent all of the reals but the point is that no finite representation can represent real numbers w/o some loss and imprecision; fixed-point decimal just does it differently than IEEE-754. Arbitrary precision is a different game, but still limited by practicality
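That point is easy to demonstrate with the standard library: Decimal fixes the 0.1 case but is still a finite representation, so it has to round numbers like 1/3 at whatever precision is configured.

```python
from decimal import Decimal, getcontext

getcontext().prec = 10  # decimal precision is finite and configurable

third = Decimal(1) / Decimal(3)
print(third)                    # 0.3333333333 -- rounded, not exact
print(third * 3 == Decimal(1))  # False: the loss doesn't round-trip
```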
|
# ? Jun 11, 2017 01:46 |
|
Boris Galerkin posted:It's more of a hardware limitation than an actual thing. In real life 0.3 is exactly 0.3, but we lose some accuracy when we represent it in a computer. I don't think that it's accurate to call that a hardware limitation, otherwise Decimal couldn't exist
|
# ? Jun 11, 2017 01:48 |
|
Malcolm XML posted:
Malcolm XML is a smart guy but this post is dumb as gently caress. I mean, people start programming and with some experience assume that "float" is just a fancy name for numbers that intuitively behave like fixed-point numbers. This reasoning deflects the fact that real numbers are infinite. I don't think many introductory materials go into teaching the details of floating points, it's a somewhat low level thing. But it's too bad, because imo every introductory material should tackle a bit into the classic 0.1 + 0.2 = 0.30000000000000004 to clear out people's minds from intuitively assuming they're fixed-point.
|
# ? Jun 11, 2017 01:50 |
|
Malcolm XML posted:but the point is that no finite representation can represent real numbers w/o some loss and imprecision, fixed point decimal does it differently than IEEE-754 And I agree with that, just not the vaguer post that you made earlier (because you can crank up the precision and use clever software to circumvent many of the common problems in floating point arithmetic without having to represent all real numbers; for instance financial software can use the Decimal type instead of float, but Decimal itself does not represent all reals with arbitrary precision)
|
# ? Jun 11, 2017 02:11 |
|
I was lost on the floating point topic until I found this on quora quote:Most answers here address this question in very dry, technical terms. I'd like to address this in terms that normal human beings can understand.
|
# ? Jun 11, 2017 02:29 |
|
It's like with decimal: you can do however many tenths and hundredths and so on you like, but there are some numbers you just can't represent perfectly, like 1/3. You can get close, and the more decimal places you have the less it matters, but eventually you run out and have to settle for 0.3333333, which is as accurate as you're gonna get in base 10. Base 2 has similar issues, just with different numbers: your fractions have to have a denominator that's a power of 2 instead of a power of 10, and that's a no-go way more often
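Both halves of that can be seen from the standard library: Fraction shows which values a float holds exactly, and Decimal(0.1) exposes the binary approximation hiding behind the literal.

```python
from decimal import Decimal
from fractions import Fraction

# Power-of-two denominators are exact in binary...
print(Fraction(0.5))   # 1/2
print(Fraction(0.25))  # 1/4

# ...but 0.1 is actually stored as the nearest representable binary fraction.
print(Fraction(0.1))   # 3602879701896397/36028797018963968
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
```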
|
# ? Jun 11, 2017 03:14 |
|
funny Star Wars parody posted:Lol if u have to think about the 64th decimal place in your code just lol lmao if you don't understand every abstraction layer right down to how the rock we shoot lightning into to do numbers does stuff
|
# ? Jun 11, 2017 04:12 |
|
When you get down to the low levels and it's all discrete values like energy quanta... can some numbers even exist? #whoa I say we ban these false numbers
|
# ? Jun 11, 2017 04:45 |
|
what if quantum computer creates a black hole right in the middle of silicon valley? it'd be a net positive
|
# ? Jun 11, 2017 05:26 |
|
Malcolm XML posted:
|
# ? Jun 11, 2017 14:13 |
|
Your avatar is ridiculous.
|
# ? Jun 11, 2017 15:00 |
|
funny Star Wars parody posted:Lol if u have to think about the 64th decimal place in your code just lol Dex posted:lmao if you don't understand every abstraction layer right down to how the rock we shoot lightning into to do numbers does stuff Not everyone in this thread builds web apps.
|
# ? Jun 11, 2017 16:30 |
|
nobody mentioned web apps
|
# ? Jun 11, 2017 16:58 |
|
KernelSlanders posted:Not everyone in this thread builds web apps. This thread has always been open to beginners. The poster who has been asking questions is obviously a beginner. Floating point stuff obviously is weird to beginners. Thermopyle fucked around with this message at 17:44 on Jun 11, 2017 |
# ? Jun 11, 2017 17:21 |
|
KernelSlanders posted:Not everyone in this thread builds web apps.
|
# ? Jun 11, 2017 18:15 |
Thermopyle posted:This thread has always been open to beginners. The poster who has been asking questions is obviously a beginner. Floating point stuff obviously is weird to beginners. Numerical Analysis is not an easy subject and it frustrates me when people pretend it's simple & intuitive.
|
|
# ? Jun 11, 2017 19:21 |
|
Eela6 posted:Numerical Analysis is not an easy subject and it frustrates me when people pretend it's simple & intuitive. In the vast (vast) majority of cases they are in fact very simple and intuitive. It's extremely rare to need a mental model of floating point that is more sophisticated than "they're like real numbers except that they get funny when they're really big or really small" even if you work with them every day.
|
# ? Jun 11, 2017 19:41 |
Nippashish posted:In the vast (vast) majority of cases they are in fact very simple and intuitive. It's extremely rare to need a mental model of floating point that is more sophisticated than "they're like real numbers except that they get funny when they're really big or really small" even if you work with them every day. Python code:
|
|
# ? Jun 11, 2017 19:45 |
|
Python code:
|
# ? Jun 11, 2017 19:52 |
|
Nippashish posted:
That's why it's confusing to beginners.
|
# ? Jun 11, 2017 19:53 |
|
Thermopyle posted:That's why its confusing to beginners. That's why I'm suggesting a very simple mental model that is intuitive and also sufficient for even non-beginners. Teaching people that floating point numbers are dark and spooky and complicated isn't very productive, because very few people need to care about that level of detail. I'm responding to "Numerical Analysis is not an easy subject and it frustrates me when people pretend it's simple & intuitive" by pointing out that for most practical purposes it can be made to be exactly that.
|
# ? Jun 11, 2017 19:58 |
Nippashish posted:That's why I'm suggesting a very simple mental model that is intuitive and also sufficient for even non-beginners. Teaching people that floating point numbers are dark and spooky and complicated isn't very productive, because very few people need to care about that level of detail. I'm not against teaching people rules of thumb, but pretending that they're more than just rules of thumb is dumb. An important part of being a professional is knowing the limits of your knowledge.
|
|
# ? Jun 11, 2017 20:05 |
|
Nippashish posted:That's why I'm suggesting a very simple mental model that is intuitive and also sufficient for even non-beginners. Teaching people that floating point numbers are dark and spooky and complicated isn't very productive, because very few people need to care about that level of detail. As the root cause of this discussion, as well as a beginner in terms of programming, I can tell you that you're oversimplifying things. Before this discussion I had no clue that floats would give irregular results. I was trying to do some really basic math functions and they were off by 10-20%. If I'm running into that in my 2nd project with basic functionality, I'd say it's definitely something almost everyone will care about sooner rather than later. I'd rather have people tell me I should watch out with where I'm using floats (like they did, thank you) than tell me not to worry because I don't need to care about that level of detail.
|
# ? Jun 11, 2017 20:10 |
As a note, the fact that approximate equality is often useful when comparing floats comes up often enough that isclose was added to the math and cmath modules in Python 3.5. Python code:
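The snippet didn't make it through the quote; a small sketch of how math.isclose reads:

```python
import math

a = 0.1 + 0.2

print(a == 0.3)              # False: exact comparison trips on rounding
print(math.isclose(a, 0.3))  # True: within the default rel_tol of 1e-09

# The tolerances are tunable via rel_tol and abs_tol.
print(math.isclose(1.0, 1.1, rel_tol=0.2))  # True
```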
|
|
# ? Jun 11, 2017 20:16 |
|
|
|
LochNessMonster posted:If I'm running into that in my 2nd project with basic funtionality I'd say it's defintely something almost everyone will care about rather sooner than later. Floating point numbers are not real numbers, and you need to be aware of this when using them. The particular way in which they are not real numbers is quite complicated, and is rarely relevant. I offered some additional guidance on how to think about the relationship between floats and reals. Floating point weirdness is also one of those topics that programmers love to expand on at length when it comes up (I'm slightly surprised no one has warned you about storing currency in floats yet) and my suggestion to "not worry about it" should be taken in the context of your post triggering a page of responses. The fact that you went away from the discussion with the impression that you should "watch out" when using floats is exactly what I was trying to mitigate.
|
# ? Jun 11, 2017 20:53 |