|
Subjunctive posted:Perl can be 1-based indexing, but, you know, it’s up to you. same with visual basic
|
# ? Jan 17, 2024 22:39 |
|
|
# ? Jun 12, 2024 12:43 |
|
VikingofRock posted:Clearly the solution is to use Haskell's Arrays, where you decide at construction time how to index the arrays. You can do 0-indexed, 1-indexed, 2-indexed, or even tuple-indexed (great for mapping 2D matrices onto contiguous memory)! Those are in practice row major, because they inherit the lexicographical ordering of tuples. They also don't support things like slices. Haskell has some decent array libraries, but Data.Array feels like it was designed by someone who had never actually written an array program.
|
# ? Jan 18, 2024 00:31 |
|
array program, you mean blas bindings?
|
# ? Jan 18, 2024 00:49 |
|
Nomnom Cookie posted:array program, you mean blas bindings?
|
# ? Jan 18, 2024 00:52 |
|
Python: Good Rust: Good Python + Rust: Great
|
# ? Jan 18, 2024 00:55 |
|
VikingofRock posted:Clearly the solution is to use Haskell's Arrays, where you decide at construction time how to index the arrays. You can do 0-indexed, 1-indexed, 2-indexed, or even tuple-indexed (great for mapping 2D matrices onto contiguous memory)! this.
|
# ? Jan 18, 2024 03:04 |
|
Arcteryx Anarchist posted:keep that poo poo in wisconsin
|
# ? Jan 18, 2024 09:25 |
|
Nomnom Cookie posted:array program, you mean APL?
|
# ? Jan 18, 2024 09:28 |
|
Nomnom Cookie posted:array program, you mean K?
|
# ? Jan 18, 2024 18:08 |
|
Nomnom Cookie posted:array program, you mean go gently caress myself?
|
# ? Jan 18, 2024 18:44 |
|
prisoner of waffles posted:many are the crimes of matlab, including cells (vectors, matrices, or tensors of pointers) and “every N-dimensional matrix is also a column-vector.” MATLAB OOP also seems totally gross. polars fuckin' slaps
|
# ? Jan 19, 2024 22:20 |
|
Tayter Swift posted:polars fuckin' slaps maybe MATLAB broke my brain, but I really like writing my matrix manipulations as functions and operators, not methods on matrix objects, and I presume polars is like pandas: functions and methods
|
# ? Jan 20, 2024 02:22 |
|
i think the trouble is that dataframe != matrix
|
# ? Jan 20, 2024 03:59 |
|
if you just want linalg use numpy/scipy
|
# ? Jan 20, 2024 04:00 |
|
apropos of chat about MATLAB, Julia, etc., I am just now reading Nick Trefethen's "An Applied Mathematician's Apology" and getting some vibes. The author is the head of the Numerical Analysis Group at Oxford. Roughly speaking, what gets called applied mathematics or numerical analysis is *actually computing* answers to problems with *continuous* numbers or *continuous* structures; CS is *actually computing*, but almost entirely with *discrete structures*; mathematics at large is almost entirely *proving-but-not-computing* with continuous and/or discrete structures. My general vibe: applied mathematics is not as prestigious as either mathematics or CS. One thing he points out is that many applied mathematicians take their work in the field towards the proving-but-not-computing territory, e.g., Chebyshev points aren't optimal for approximating a function, and proving which points *are* optimal makes a neat paper, but at the practical computational level the optimal points aren't enough of an improvement to bother with. The (rather MATLAB-like) language chebfun, which I had never heard of before, grew out of his research group and seems interesting. Uhhh, to get back on topic: maybe MATLAB etc. are weird because they're built by applied mathematicians rather than language designers, so no wonder that MATLAB, Julia etc. are so janky.
|
# ? Jan 20, 2024 05:06 |
|
Arcteryx Anarchist posted:i think the trouble is that dataframe != matrix hmm. sorry, I would still like to manipulate both my dataframes and my matrices with operators and functions, not functions and methods. it's purely a stylistic thing, but it just feels super gross to me when a manipulation that feels akin to some tidy mathematical operation (project full dataframe down to these columns, apply this function to them, aggregate) is done as dataFrame[:, columns].apply(f).aggregate(g) instead of g(f(dataFrame[:, columns])). prisoner of waffles fucked around with this message at 05:11 on Jan 20, 2024 |
# ? Jan 20, 2024 05:07 |
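the two styles side by side, in pandas (toy frame, column names, and functions made up purely for illustration):

```python
import pandas as pd

# hypothetical data, just to contrast the two styles
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
columns = ["a", "b"]

def f(d):
    return d * 2          # "apply this function to them"

def g(d):
    return d.sum()        # "aggregate"

# method-chaining style: the whole pipeline hangs off the object
by_methods = df[columns].apply(f).agg("sum")

# function-composition style: the same pipeline read as g(f(df[columns]))
by_functions = g(f(df[columns]))

assert by_methods.equals(by_functions)
```

same result either way; the disagreement upthread is purely about which one reads like math.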
|
my coworker that left not too long ago was an applied math/numerical analysis guy and my favorite thing was finding an old photo of him on a university website just because he was so much younger in it that it was funny
|
# ? Jan 20, 2024 05:11 |
|
prisoner of waffles posted:hmm. sorry, I would still like to manipulate both my dataframes and my matrices with operators and functions, not functions and methods. sorry to hear that
|
# ? Jan 20, 2024 05:11 |
|
prisoner of waffles posted:Uhhh, to get back on topic: maybe MATLAB etc. are weird because So maybe I'm misunderstanding what you're saying, but: I don't think that applied mathematicians make the best PL designers or compiler devs. I think people who have experience in those fields do. It's important to note that being good and well-designed aren't really related to adoption and usage - just look at the popularity of javascript if you need an example of how not.
|
# ? Jan 20, 2024 05:35 |
|
JawnV6 posted:having a hard time reading the python complaints from a few pages back as someone who used to get paid to maintain perl written by EE's we do this a lot and call it reflection
|
# ? Jan 20, 2024 06:02 |
|
(it’s rly fuckin bad lol)
|
# ? Jan 20, 2024 06:03 |
|
i don’t think anyone who’s done much numerical analysis thinks programming languages have much to offer the situation. my understanding is that general techniques for tracking precision through computation trees tend to wildly underestimate it: after a million operations, the supposed imprecision accumulates until the error bars are orders of magnitude larger than the result, when actually the computation is more stable than the general technique gives it credit for. even getting a real handle on the distribution of the result of computation requires human intelligence
|
# ? Jan 20, 2024 06:04 |
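rjmccall's point in miniature: a toy interval-arithmetic sketch (a made-up class, not any real library) can't see that two operands are correlated, so the bounds balloon even when the true result is exact:

```python
# toy interval arithmetic: carry [lo, hi] bounds through every operation
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # the rule must assume the operands are independent,
        # so it takes the worst case across both intervals
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)   # some value known to lie in [1, 2]
d = x - x            # mathematically this is exactly 0
print(d)             # [-1, 1]: the general technique can't tell
```

do that a million times and the bars swamp the answer, even though the actual computation was perfectly stable.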
|
Visions of Valerie posted:So maybe I'm misunderstanding what you're saying, but: I don't think that applied mathematicians make the best PL designers or compiler devs. I think people who have experience in those fields do. It's important to note that being good and well-designed aren't really related to adoption and usage - just look at the popularity of javascript if you need an example how not. I think there are two opposing factors: - as you say, applied mathematicians are mostly not going to be PL designers or compiler devs; - applied mathematicians want to make tools that are good for their distinct classes of problems. The first causes jankiness in languages like MATLAB, at the implementation and (let's call it) specification level. The second causes a generally good fit between MATLAB and the class of problems at which it is aimed.
|
# ? Jan 20, 2024 06:26 |
|
rjmccall posted:i don’t think anyone who’s done much numerical analysis thinks programming languages have much to offer the situation. my understanding is that general techniques for tracking precision through computation trees tend to wildly underestimate it: after a million operations, the supposed imprecision accumulates until the error bars are orders of magnitude larger than the result, when actually the computation is more stable than the general technique gives it credit for. even getting a real handle on the distribution of the result of computation requires human intelligence it would be nice if you could go like #significant_digits 2 and then it’s all taken care of for you all through the program plus, people would use it even more for money and i love seeing the weird things that happen in that space (except for one thousand innocent people going to jail, but you know) Carthag Tuek fucked around with this message at 12:05 on Jan 20, 2024 |
# ? Jan 20, 2024 12:03 |
|
Carthag Tuek posted:it would be nice if you could go like #significant_digits 2 and then its all taken care of for you all through the program How would that avoid roundoff error? It just makes it a different roundoff error.
|
# ? Jan 20, 2024 12:44 |
|
Athas posted:How would that avoid roundoff error? It just makes it a different roundoff error. its called "punting the problem" sorry boss, we only have two digits
|
# ? Jan 20, 2024 13:13 |
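for what it's worth, Python's decimal module has something close to that pragma: a context-wide significant-digit setting. as Athas says, it doesn't remove roundoff, it just relocates it:

```python
from decimal import Decimal, getcontext

getcontext().prec = 2  # roughly "#significant_digits 2" for all Decimal math

third = Decimal(1) / Decimal(3)
print(third)       # 0.33
print(third * 3)   # 0.99 -- the roundoff error is still there, just elsewhere
```

sorry boss, we only have two digits.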
|
prisoner of waffles posted:apropos of chat about MATLAB, Julia, etc., I am just now reading Nick Trefethen's "An Applied Mathematician's Apology" and getting some vibes. The author is the head of the Numerical Analysis Group at Oxford. I've never read that of his but I have read his book Approximation Theory and Approximation Practice and I cannot recommend it enough to anyone with a vague interest in how to deal with continuous functions, computationally. the guy's enthusiasm for the subject comes through incredibly well, the style is conversational, it's easy to sit down and implement your own toy chebfun after, going between function samples and chebyshev polynomial coefficients feels like magic, just a really good book, love it. lecture videos are on his website somewhere as well, i think
|
# ? Jan 20, 2024 14:06 |
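the "toy chebfun" exercise really is small; numpy's Chebyshev helpers do the samples-to-coefficients round trip (the function and degree here are arbitrary choices for illustration, not anything from the book):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# sample a smooth function at Chebyshev points on [-1, 1]
n = 20
x = np.cos(np.pi * np.arange(n + 1) / n)
f = np.exp(x) * np.sin(5 * x)

# fit: samples -> Chebyshev coefficients (degree-n interpolant)
coeffs = C.chebfit(x, f, deg=n)

# evaluate the interpolant anywhere; for smooth f the error
# shrinks geometrically with the degree
t = np.linspace(-1, 1, 1000)
err = np.max(np.abs(C.chebval(t, coeffs) - np.exp(t) * np.sin(5 * t)))
print(err)  # very small for this smooth f at modest n
```

that geometric convergence is the "feels like magic" part.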
|
Later on in his “An Applied Mathematician's Apology”, he tells how he met both founders of MathWorks, placed an order for 10 copies of the first version of MATLAB, and was in fact the first person to purchase MATLAB. MathWorks later gave him a plaque celebrating this.
|
# ? Jan 20, 2024 17:50 |
|
as an experimental physicist, i unironically love matlab. it is great at doing stuff with arrays of numbers, which is 90% of all my coding, and the plotting beats anything else (you can copy and paste curves from one figure to another, which is insanely useful). now though everything is switching to python with numpy, and i can't shake the feeling that it's all just knockoff matlab, where the numerical array stuff is clearly an addon as opposed to the core aspect of the language. like you have to wrap everything in np.array() to let the language know that you might like to do some math with these numbers. for anything *not* array and matrix math then for sure python wins, but that kind of coding is genuinely rare for a lot of sciency type people my latest love is julia though, which really clearly *gets* numerical processing (absolutely any function can be broadcast over a set of arrays by adding a dot!), and actually runs fast to boot. i'm not too optimistic about there being enough momentum behind it to ever really go anywhere, though
|
# ? Jan 20, 2024 19:35 |
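the np.array() complaint in miniature: plain Python lists don't do math with their operators at all, and wrapping is what buys you the MATLAB-ish elementwise behavior:

```python
import numpy as np

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

# plain lists: + concatenates, * repeats -- no arithmetic
print(a + b)   # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# wrapped in np.array(), the same operators become elementwise math
x, y = np.array(a), np.array(b)
print(x + y)   # [5. 7. 9.]
print(x * y)   # [ 4. 10. 18.]
```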
|
python has “enough” scientific support and a huge library so its momentum is strong
|
# ? Jan 20, 2024 19:42 |
|
chatt posted:as an experimental physicist, i unironically love matlab. it is great at doing stuff with arrays of numbers, which is 90% of all my coding, and the plotting beats anything else (you can copy and paste curves from one figure to another, which is insanely useful). now though everything is switching to python with numpy, and i can't shake the feeling that it's all just knockoff matlab, where the numerical array stuff is clearly an addon as opposed to the core aspect of the language. like you have to wrap everything in np.array() to let the language know that you might like to do some math with these numbers. for anything *not* array and matrix math then for sure python wins, but that kind of coding is genuinely rare for a lot of sciency type people you don't "let the language know" - python is a general-purpose programming language, it only has primitives like integers, floats, strings, lists, etc. numpy has done a lot of work to make it possible to write python that feels like matlab or R, but a numpy array is just an object from python's point of view, one that so happens to define a special behavior for "what happens when you write array1 * array2" fundamentally it's just built on the facilities that python makes available for any type of object - if you felt deranged enough, you could define what happens when you divide an HTTP request by a datetime or w/e.
|
# ? Jan 20, 2024 21:26 |
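the hook Quebec Bagnet describes is just Python's operator dunder methods; a made-up toy type (nothing like numpy's actual implementation) shows the whole mechanism:

```python
class Vec:
    """Toy 'array' type: numpy gets its operators from hooks like these."""

    def __init__(self, data):
        self.data = list(data)

    def __mul__(self, other):
        # define what "vec1 * vec2" means: here, elementwise product
        return Vec(x * y for x, y in zip(self.data, other.data))

    def __repr__(self):
        return f"Vec({self.data})"

print(Vec([1, 2, 3]) * Vec([4, 5, 6]))  # Vec([4, 10, 18])
```

define `__truediv__` on an HTTP request class and you have the deranged version.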
|
numpy is the fortran monad implemented for Python and it’s much more elegant in haskell where you also get free concurrency and a dot product operator
|
# ? Jan 20, 2024 21:40 |
|
Quebec Bagnet posted:if you felt deranged enough, you could define what happens when you divide an HTTP request by a datetime or w/e. https://en.cppreference.com/w/cpp/filesystem/path/operator_slash
|
# ? Jan 21, 2024 01:09 |
|
neat, python's pathlib works the same way
|
# ? Jan 21, 2024 05:00 |
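same mechanism as the C++ operator/: pathlib defines `__truediv__` on path objects:

```python
from pathlib import PurePosixPath

# "/" on a path is just PurePath.__truediv__, mirroring C++'s operator/
p = PurePosixPath("etc") / "nginx" / "nginx.conf"
print(p)  # etc/nginx/nginx.conf
```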
|
why wouldn’t you just port Common Lisp’s pathname system? it even works with MIT-style versioning filesystems, including XNS, Lisp Machine, and VMS filesystems eschaton fucked around with this message at 09:16 on Jan 21, 2024 |
# ? Jan 21, 2024 09:10 |
|
Nomnom Cookie posted:numpy is the fortran monad implemented for Python and it’s much more elegant in haskell where you also get free concurrency and a dot product operator of course it is, it’s haskell
|
# ? Jan 21, 2024 09:25 |
|
Nomnom Cookie posted:numpy is the fortran monad implemented for Python and it’s much more elegant in haskell where you also get free concurrency and a dot product operator It is easy to heat up all your cores with Haskell and get the right result, but it is harder to make it fast.
|
# ? Jan 21, 2024 13:11 |
|
I was trying to be ridiculous but I guess I undershot
|
# ? Jan 21, 2024 18:33 |
|
Nomnom Cookie posted:I was trying to be ridiculous but I guess I undershot
|
# ? Jan 21, 2024 19:21 |
|
|
Athas posted:It is easy to heat up all your cores with Haskell and get the right result, but it is harder to make it fast. it’s only 2-3x slower than rust. good enough for me!
|
# ? Jan 22, 2024 03:58 |