|
In Rust, the general solution is that you can elect to return Result<_, Box<Error>>, where Box<Error> means it can be any error. This has some tradeoffs, though. Namely, you can't distinguish the type of the error anymore, because it's been erased into a trait object: all you have left is a vtable to the Error implementation. Unfortunately, the other solutions (such as designing your own error type that's an enum of the possible errors that could be returned) lose you the use of the ? operator, making things unwieldy again.
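A minimal sketch of that tradeoff (file name and contents are made up; the modern spelling is Box<dyn Error>):
code:
use std::error::Error;
use std::fs;

// Any concrete error gets boxed into the same trait object, so `?`
// works on all of them; the price is that the caller only sees
// "some Error" unless it downcasts.
fn first_number(path: &str) -> Result<i64, Box<dyn Error>> {
    let text = fs::read_to_string(path)?;  // io::Error, boxed by `?`
    let n = text.trim().parse::<i64>()?;   // ParseIntError, boxed by `?`
    Ok(n)
}

fn main() {
    match first_number("numbers.txt") {
        Ok(n) => println!("got {n}"),
        // Recovering the concrete type means downcasting by hand:
        Err(e) if e.is::<std::io::Error>() => eprintln!("I/O problem: {e}"),
        Err(e) => eprintln!("something else: {e}"),
    }
}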
|
# ? Sep 25, 2017 00:41 |
|
TheBlackVegetable posted:
The obvious solution then is to have some function of type E1 -> E2 so you can embed one error space inside the other. But it's really not clear to me how to embed a DivisionByZeroError inside the error space of, say, your built-in library function that opens a file.
|
# ? Sep 25, 2017 01:21 |
Linear Zoetrope posted:
Unfortunately, the other solutions (such as designing your own error type that's an enum of the possible errors that could be returned) lose you the use of the ? operator making it unwieldy again.

I thought you could still use the ? operator if you implement From<E> for your error enum, where E is the wrapped error?
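Right: `?` desugars through From::from, so a sketch like the following (MyError and the helper names are invented for illustration) keeps both the enum and the operator:
code:
use std::{io, num};

// Hypothetical enum wrapping the two failure modes we care about.
#[derive(Debug)]
enum MyError {
    Io(io::Error),
    Parse(num::ParseIntError),
}

// These From impls are what let `?` do the conversion implicitly.
impl From<io::Error> for MyError {
    fn from(e: io::Error) -> Self { MyError::Io(e) }
}
impl From<num::ParseIntError> for MyError {
    fn from(e: num::ParseIntError) -> Self { MyError::Parse(e) }
}

fn first_number(path: &str) -> Result<i64, MyError> {
    let text = std::fs::read_to_string(path)?; // io::Error -> MyError::Io
    Ok(text.trim().parse()?)                   // ParseIntError -> MyError::Parse
}

fn main() {
    // Unlike Box<dyn Error>, the caller can match on the concrete variant.
    match first_number("numbers.txt") {
        Ok(n) => println!("got {n}"),
        Err(MyError::Io(e)) => eprintln!("I/O problem: {e}"),
        Err(MyError::Parse(e)) => eprintln!("not a number: {e}"),
    }
}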
|
|
# ? Sep 25, 2017 02:36 |
|
vOv posted:
The obvious solution then is to have some function of type E1 -> E2 so you can embed one error space inside the other. But it's really not clear to me how to embed a DivisionByZeroError inside the error space of, say, your built-in library function that opens a file.

I think that discordance highlights the real issue. We shouldn't be using the same system to handle true faults (cannot allocate any more memory) as perfectly normal conditions (trying to parse a string to int that contains letters).
|
# ? Sep 25, 2017 03:18 |
|
KernelSlanders posted:
I think that discordance highlights the real issue. We shouldn't be using the same system to handle true faults (cannot allocate any more memory) as perfectly normal conditions (trying to parse a string to int that contains letters).

Right, that's what I'm arguing.
|
# ? Sep 25, 2017 04:36 |
|
hackbunny posted:
Don't handle CPU faults as exceptions unless you want to add a lot of overhead to exceptions.

I assume you wouldn't suggest handling them as part of a Result type, which was the alternative under discussion. Actually, what would you suggest, and what alternatives have been used? I've never worked with signals in a language that did it any other way.
|
# ? Sep 25, 2017 05:55 |
|
Absurd Alhazred posted:
0x10000000 would be a slightly better candidate for signed integers; no problem with 0xFFFFFFFF for unsigned integers, +1 from that is overflow anyway.

What unholy significance are you assigning to bit 28 here? Surely you meant 0x80000000?
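For anyone keeping score, a throwaway check of the bit positions in question (nothing here is anyone's actual sentinel code):
code:
fn main() {
    assert_eq!(0x1000_0000u32, 1u32 << 28);      // bit 28: an odd sentinel pick
    assert_eq!(0x8000_0000u32, 1u32 << 31);      // the sign bit...
    assert_eq!(0x8000_0000u32 as i32, i32::MIN); // ...i.e. the most negative i32
    assert_eq!(0xFFFF_FFFFu32, u32::MAX);        // +1 from here wraps to 0
}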
|
# ? Sep 25, 2017 17:58 |
|
Don't mind me, I'll just be over here rewriting C's usual arithmetic conversion rules to make sizeof(int) + (-1) not fail catastrophically.
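The gotcha, rendered in Rust since that's the thread's language of the day: sizeof yields an unsigned type, so C's usual arithmetic conversions turn the -1 into SIZE_MAX before the addition. Rust refuses to compile the mixed-sign version outright, so the casts in this sketch are explicit and purely illustrative:
code:
fn main() {
    let size = std::mem::size_of::<i32>(); // usize, Rust's size_t; always 4 for i32
    // `size + (-1)` is a type error in Rust. C's usual arithmetic
    // conversions instead convert -1 to SIZE_MAX and let the sum wrap:
    let c_style = size.wrapping_add((-1isize) as usize);
    assert_eq!(c_style, 3);
}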
|
# ? Sep 25, 2017 20:32 |
|
How did a discussion about handling division errors in an ergonomic way for developers turn into this?
|
# ? Sep 25, 2017 21:03 |
|
Coffee Mugshot posted:
How did a discussion about handling division errors in an ergonomic way for developers turn into this?

Christ, this is the entire point behind zero-cost abstractions. You don't get the context served up on a platter; if you need to know how we got here, expect to unwind it yourself. Prepare metadata accordingly.
|
# ? Sep 25, 2017 22:18 |
JawnV6 posted:
Christ, this is the entire point behind zero-cost abstractions. You don't get the context served up on a platter; if you need to know how we got here, expect to unwind it yourself. Prepare metadata accordingly.
|
|
# ? Sep 25, 2017 22:23 |
|
That's pretty good but I still think that's more of an API problem than a language problem
|
# ? Sep 25, 2017 22:56 |
|
JawnV6 posted:
What unholy significance are you assigning to bit 28 here? Surely you meant 0x80000000?

That's what I meant.

Coffee Mugshot posted:
That's pretty good but I still think that's more of an API problem than a language problem

Yeah, we should have encoded version and structure type into the first 64-bit word to ensure forward compatibility.
|
# ? Sep 26, 2017 01:47 |
|
Doc Hawkins posted:
I assume you wouldn't suggest handling them as part of a Result type, which was the alternative under discussion.

This is, or used to be, one of my areas of greatest expertise. We need to make a very important distinction first: handling faults vs catching faults.

Take a relatively common example of user-mode handling of CPU faults: on-demand memory allocation. Let's say you reserve a large buffer in memory as a range of unallocated memory pages: as soon as code attempts to write to the buffer, the CPU faults on the missing page, and calls the user mode handler; for simplicity, let's assume it's the SIGSEGV handler in a C program. In the signal handler, the program looks at the faulting address (the si_addr field of the signal information structure), and if it's within the range of the buffer, it allocates the page that was missing (let's say, with mmap), and resumes execution. This is an example of handling a fault: the faulting code doesn't even realize a fault occurred.

Catching, on the other hand, is what you need for unrecoverable errors - for example, a division by zero, which can't be "fixed" in any way by the SIGFPE handler - when you don't want to terminate the whole process. You can do this on Windows (and OS/2, and - !!! - Tru64 UNIX) with the non-standard __try/__except construct, which lets you catch CPU faults like they were language exceptions. Ideally, in Rust you'd add an equivalent construct that can capture the fault in a Result. But this is easier said than done...

Think about it: what does the fault catcher know about the code that faulted, except that somewhere down the stack a division by zero occurred? Absolutely nothing. The faulting code might have been in the middle of a larger operation, and by interrupting it, effectively forcing a recursive return statement at that point, you might have left some data in an undefined state. Sure, you have the __try/__finally statement, to clean up even when the code is interrupted by a fault, but to be on the safe side, you have to do all clean-up with __try/__finally, in your entire program, external libraries and operating system binaries included. And do you know what happens to all code inside a __try? It must be compiled with the assumption that every single instruction can do the equivalent of throwing an exception, even NOPs (what if a fault occurs while fetching the NOP instruction itself?).

In languages that implicitly add clean-up code, like C++ (see the RAII idiom), any function with implicit clean-up must be compiled like it was wrapped in a single big __try/__finally, or a fault might cause clean-up code to be skipped or - possibly even worse - be performed on not fully initialized data (in fact, it's more like several nested __try/__finally statements, one for each variable that needs to be destroyed). This interferes with compilers and especially optimizers so much that, even when it's supported, it's disabled by default: on gcc, you need to compile with -fnon-call-exceptions; on Visual C++, with the equivalent /EHa; clang outright doesn't support it, as it's completely unsupported by the backend.

A function that can catch CPU faults in a Result would have to be explicitly marked as such, and it would have to ignore any faults raised by functions that weren't marked as fault-safe (or worse, raised by foreign code altogether), and let them crash the program. I'm unfamiliar with Rust but I don't see its strictness playing well with structured fault handling at all.

hackbunny fucked around with this message at 13:56 on Sep 26, 2017 |
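To make the handling case concrete, here is a rough Rust rendering of the demand-allocation trick described above, via the libc crate. This is Linux-only, and calling mmap inside a signal handler is not strictly async-signal-safe, so treat it as an illustration of the mechanism rather than production code:
code:
use libc::{
    c_int, c_void, mmap, sigaction, sigemptyset, siginfo_t, MAP_ANONYMOUS,
    MAP_FAILED, MAP_FIXED, MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE,
    SA_SIGINFO, SIGSEGV,
};
use std::sync::atomic::{AtomicUsize, Ordering};

const BUF_LEN: usize = 1 << 30; // reserve 1 GiB of address space
const PAGE: usize = 4096;

static BUF_START: AtomicUsize = AtomicUsize::new(0);

extern "C" fn on_segv(_sig: c_int, info: *mut siginfo_t, _ctx: *mut c_void) {
    unsafe {
        let fault = (*info).si_addr() as usize;
        let start = BUF_START.load(Ordering::Relaxed);
        if fault < start || fault >= start + BUF_LEN {
            // Not our buffer: restore the default action and re-fault -> crash.
            libc::signal(SIGSEGV, libc::SIG_DFL);
            return;
        }
        // Commit the one missing page, then resume the faulting instruction.
        let page = (fault & !(PAGE - 1)) as *mut c_void;
        if mmap(page, PAGE, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED {
            libc::abort();
        }
    }
}

fn main() {
    unsafe {
        // Reserve pages with no access rights: any touch raises SIGSEGV.
        let base = mmap(std::ptr::null_mut(), BUF_LEN, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert_ne!(base, MAP_FAILED);
        BUF_START.store(base as usize, Ordering::Relaxed);

        let mut sa: sigaction = std::mem::zeroed();
        sa.sa_sigaction = on_segv as usize;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&mut sa.sa_mask);
        sigaction(SIGSEGV, &sa, std::ptr::null_mut());

        // The write faults, the handler maps the page, execution resumes,
        // and this code never learns a fault happened.
        *(base as *mut u8) = 42;
        assert_eq!(*(base as *mut u8), 42);
    }
}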
# ? Sep 26, 2017 13:48 |
|
Rust just emits asserts against zero before all divisions, so you get a panic instead of a CPU fault.
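A quick sketch of what that looks like from the program's side (variable names made up; checked_div is the Result-friendly spelling):
code:
fn main() {
    let (a, b) = (10i32, 0i32);

    // The explicit, recoverable spelling:
    assert_eq!(a.checked_div(2), Some(5));
    assert_eq!(a.checked_div(b), None);         // divisor is zero
    assert_eq!(i32::MIN.checked_div(-1), None); // the overflow case is checked too

    // A bare `a / b` compiles, but the emitted check panics at runtime
    // ("attempt to divide by zero") rather than raising SIGFPE:
    assert!(std::panic::catch_unwind(|| a / b).is_err());
}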
|
# ? Sep 27, 2017 01:11 |
|
I prefer all my runtime errors to result in a full kernel panic, no possibility of lazy catching there
|
# ? Sep 27, 2017 21:38 |
|
I use a custom kernel & hardware controller that fills my computer with jam whenever it encounters a runtime error
|
# ? Sep 27, 2017 21:41 |
|
Casual reminder of why git is a flawless VCS http://caiustheory.com/git-git-git-git-git/
|
# ? Sep 27, 2017 22:33 |
|
Coffee Mugshot posted:
Casual reminder of why git is a flawless VCS http://caiustheory.com/git-git-git-git-git/

Too many gits, too many gits... Too many gits, toooo maannny giiiittss...
|
# ? Sep 28, 2017 02:11 |
|
vOv posted:
'this program needs to die now' in case you do something like run out of memory

QuarkJets posted:
The fact that Matlab actually does the opposite of that (upcasting the operation and then downcasting the result to the smallest of the two precisions) is unreasonable and frankly just bizarre

The fact that a math focused application ensures results introduce no false precision is entirely reasonable and good. If x=pi is a single, y=pi is a double, then x+y can only ever be 2pi to the precision of a single. Returning x+y as a double is just wrong.
|
# ? Sep 28, 2017 14:04 |
|
PhantomOfTheCopier posted:
The fact that a math focused application ensures results introduce no false precision is entirely reasonable and good. If x=pi is a single, y=pi is a double, then x+y can only ever be 2pi to the precision of a single. Returning x+y as a double is just wrong.

Suppose we have a (double-precision) value of 2^53 units, and we add (single-precision) 2 to that. How precisely do we know the result? Similarly, suppose we have two (double-precision) values of 2^53 and 2^53+2. How precisely do we know the difference between them?
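A sketch of the answer in Rust, with f64/f32 standing in for double/single (values chosen so the rounding is exact and deterministic):
code:
fn main() {
    let big = 2f64.powi(53); // 9007199254740992.0
    let sum = big + 2.0;     // 2^53 + 2 is still exactly representable as f64

    // Stay in double precision and the small difference survives:
    assert_eq!(sum - big, 2.0);

    // Downcast the way Matlab would: a single has a 24-bit significand, so
    // adjacent floats near 2^53 are 2^30 apart and the +2 vanishes entirely.
    assert_eq!(sum as f32 - big as f32, 0.0);
}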
|
# ? Sep 28, 2017 14:12 |
|
Post the application in which that arises and you'll have supplied the thread with the coding horror. lol Pluto's perturbations on Earth's tides are getting lost in a double.
|
# ? Sep 28, 2017 15:00 |
|
Oh you're right, I can't imagine why anyone would ever care about small differences between large numbers. That must be some kind of epic coding horror and not a highly useful physical experiment at all!
|
# ? Sep 28, 2017 16:17 |
|
Jabor posted:
Oh you're right, I can't imagine why anyone would ever care about small differences between large numbers. That must be some kind of epic coding horror and not a highly useful physical experiment at all!

On the other hand, if you look at the actual data (http://polac.obspm.fr/llrdatae.html), you'll note that it looks like:
code:
|
# ? Sep 28, 2017 17:22 |
|
code:
|
# ? Sep 28, 2017 19:01 |
|
CPColin posted:
okay, fixed it
|
# ? Sep 28, 2017 19:45 |
|
Whew!
|
# ? Sep 28, 2017 19:57 |
|
PhantomOfTheCopier posted:
The fact that a math focused application ensures results introduce no false precision is entirely reasonable and good. If x=pi is a single, y=pi is a double, then x+y can only ever be 2pi to the precision of a single. Returning x+y as a double is just wrong.

Clearly, we should represent all real values as lazy infinite continued-fraction lists.
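Taking the joke a quarter-seriously, a toy sketch: sqrt(2) as the lazy infinite continued fraction [1; 2, 2, 2, ...], consumed one convergent at a time (all names here are invented for illustration):
code:
// sqrt(2) = [1; 2, 2, 2, ...]: an infinite, lazily evaluated coefficient
// stream, folded into successive rational convergents h/k.
fn sqrt2_convergents() -> impl Iterator<Item = f64> {
    let coeffs = std::iter::once(1u64).chain(std::iter::repeat(2u64));
    let (mut h, mut h_prev, mut k, mut k_prev) = (1u64, 0u64, 0u64, 1u64);
    coeffs.map(move |a| {
        // Standard recurrence: h_n = a_n * h_(n-1) + h_(n-2), same for k.
        let (h_new, k_new) = (a * h + h_prev, a * k + k_prev);
        h_prev = h;
        k_prev = k;
        h = h_new;
        k = k_new;
        h as f64 / k as f64
    })
}

fn main() {
    // Each convergent is a better rational approximation than the last.
    let approx = sqrt2_convergents().nth(20).unwrap();
    assert!((approx - 2f64.sqrt()).abs() < 1e-12);
}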
|
# ? Sep 28, 2017 20:50 |
|
PhantomOfTheCopier posted:
The fact that a math focused application ensures results introduce no false precision is entirely reasonable and good. If x=pi is a single, y=pi is a double, then x+y can only ever be 2pi to the precision of a single. Returning x+y as a double is just wrong.

If an instrument records the number 1 as an 8-bit integer, by your logic any math that I do with that result should be restricted to 0-255. That's some broke-brain bullshit.

e: If you want your math- and science-focused language to be overly concerned with numerical precision based on the number of bits storing results, then you should spit out an error when two different precision levels clash; that lets the user decide whether they actually care about the numerical precision of all of the variables in that operation. Silently downcasting the result is the worst, least reasonable option. Downcasting while also spitting out a warning would at least tell the user that something has happened that they may not expect (since this behavior is counter-intuitive in all ways), but Matlab doesn't do that.

QuarkJets fucked around with this message at 21:25 on Sep 28, 2017 |
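For what it's worth, Rust takes exactly that refuse-and-make-the-user-decide option; a two-line demonstration:
code:
fn main() {
    let x: f32 = 1.0;
    let y: f64 = 1.0;

    // let z = x + y; // does not compile: cannot add `f64` to `f32`

    let z = x as f64 + y; // the user decided: upcast explicitly, keep f64
    assert_eq!(z, 2.0);
}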
# ? Sep 28, 2017 21:18 |
|
PhantomOfTheCopier posted:
Clearly the working set sheriff should present the system administrator with a prompt to add more memory to the system and refuse to continue operations until the modules have been inserted.
|
# ? Sep 28, 2017 21:32 |
|
Is there a name for variables whose types keep getting changed over the course of a program written in a dynamic language? If not, I want to call them "skinwalker variables." First it's a class type, then it's a dictionary, and later it breaks into your cabin and eats your face.
|
# ? Sep 28, 2017 23:30 |
|
PhantomOfTheCopier posted:
The fact that a math focused application ensures results introduce no false precision is entirely reasonable and good. If x=pi is a single, y=pi is a double, then x+y can only ever be 2pi to the precision of a single. Returning x+y as a double is just wrong.

Sorry but this is crazy and you are a crazy person. It's useful for intermediate calculations to use excess precision.
|
# ? Sep 29, 2017 00:01 |
|
You people sure have some strange reading problems. Matlab is an application. It is perfectly reasonable for it to return calculations that adhere most closely to standard practice of significant digits. Nothing prevents, and nothing I wrote prevents, intermediate and internal calculation from being handled differently. Real number results are handled differently than integers (which I said nothing about). Nothing prevents an idiot with a computer programming language from specifically requesting the wrong value for 2pi. I'm sure some legislator would have been happy to have such an app.

As to the bot poster who replies immediately to every word I write: thank you for being a follower, but a straw man argument simply will not do. Differences between large and small numbers are very important, and this I acknowledged in my original comment (which you apparently failed to actually read). Anyone who writes code to handle those situations, including moon mirror fluctuations, assuming that a double in a standard statically typed computer programming language (which is NOT a mathematics application, so different than what was being discussed) is sufficient for their needs will have their crappy code posted here.

Now I know why people like agile development; it's because they couldn't possibly write code based on a specification they'd have to read.
|
# ? Sep 29, 2017 00:40 |
|
PhantomOfTheCopier posted:
You people sure have some strange reading problems. Matlab is an application. It is perfectly reasonable for it to return calculations that adhere most closely to standard practice of significant digits. Nothing prevents and nothing I wrote prevents intermediate and internal calculation to be handled differently. Real number results are handled differently than integers (which I said nothing about).

2^30, as a double, has an absolute precision of ±2^-22. 4, as a single, has an absolute precision of ±2^-21. And yet if you add them together, you think that an absolute precision of ±2^7 is reasonable? That's ultimately the point: floating-point size has no correlation with the absolute precision of a measurement or result. Trying to justify ass-backwards behaviour by claiming that it enforces correct precision is completely laughable.

PhantomOfTheCopier posted:
As to the bot poster who replies immediately to every word I write,

Get over yourself.
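Those spacings are mechanical to verify: EPSILON is the gap just above 1.0, and the absolute spacing scales with the value's exponent. A quick check:
code:
fn main() {
    // EPSILON is the ulp at 1.0 (2^-52 for f64, 2^-23 for f32); the
    // absolute spacing scales with the exponent of the value.
    assert_eq!(f64::EPSILON * 2f64.powi(30), 2f64.powi(-22)); // double at 2^30
    assert_eq!(f32::EPSILON * 4.0, 2f32.powi(-21));           // single at 4
    assert_eq!(f32::EPSILON * 2f32.powi(30), 2f32.powi(7));   // single at 2^30
}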
|
# ? Sep 29, 2017 01:00 |
|
PhantomOfTheCopier posted:
thank you for being a follower but a straw man argument simply will not do.

What about an ad hominem, then? Your posting is bad, lol.
|
# ? Sep 29, 2017 03:50 |
PhantomOfTheCopier posted:
You people sure have some strange reading problems. Matlab is an application. It is perfectly reasonable for it to return calculations that adhere most closely to standard practice of significant digits. Nothing prevents and nothing I wrote prevents intermediate and internal calculation to be handled differently. Real number results are handled differently than integers (which I said nothing about). Nothing prevents an idiot with a computer programming language from specifically requesting the wrong value for 2pi. I'm sure some legislator would have been happy to have such an app.

you're real dumb, hth
|
|
# ? Sep 29, 2017 04:42 |
|
Coding horror more like coder horror
|
# ? Sep 29, 2017 04:49 |
|
Let us never lose sight of the fact that Matlab is a terrible, awful programming language that only survives because it has so goddamn many scientific/mathematical libraries bolted onto it. I suppose the takeaway lesson is that having a strong standard library does a lot to make a language attractive, and conversely, that languages with sparse standard libraries are not very appealing.
|
# ? Sep 29, 2017 05:19 |
|
TooMuchAbstraction posted:
I suppose the takeaway lesson is that having a strong standard library does a lot to make a language attractive, and conversely, that languages with sparse standard libraries are not very appealing.

unless it's javascript
|
# ? Sep 29, 2017 06:47 |
|
|
Plorkyeran posted:
unless it's javascript
|
# ? Sep 29, 2017 08:11 |