|
Doc Hawkins posted:I'm not that ignorant, I just thought share-nothing concurrency would lend itself to parallelism too. But thank you for the answer. Well, it does, but as you said yourself: Erlang lends itself to a programming model where you spawn thousands (or millions) of processes. On a modern multicore processor, you need maybe 32 processes to exploit the hardware, and more just means overhead. Even with Erlang's lightweight threads you end up "simulating" all this parallelism that you don't really need. And on a cluster, performance depends on efficient communication patterns and locality, which Erlang doesn't really expose in a useful way (any process may send a message to any other process). Message passing is presently the only efficient way to program clusters, though. Efficient parallelism is all about limiting communication and ensuring that the granularity of parallelisation isn't smaller than it has to be.
|
# ? Mar 26, 2016 20:02 |
|
|
# ? May 14, 2024 06:47 |
|
Thanks for the explanations/thoughts on functional programming and parallelism/concurrency. Still working my way through beginner tutorials in Haskell, think I'm going to drop the $60 on the Haskell Programming From First Principles book.
Hughmoris fucked around with this message at 00:09 on Apr 2, 2016 |
# ? Apr 2, 2016 00:06 |
|
Ralith posted:Sure, but for the typical case of applications where it's not necessary to squeeze every last bit of juice out of your CPU, this represents a legitimate case where it's much easier to write correct parallel code. As a former PhD student who once wrote an embarrassingly-gushing-in-hindsight essay about TM... kindof? It's very easy to write code that is obviously "correct" with TM, but that means a lot less than you might think. TM — hardware or software — tends to degrade terribly in the face of any actual contention for the memory being accessed, because of course the transaction cannot be applied if any of the memory has been touched. But even absent contention, verifying the validity of the transaction is very expensive in software implementations, and hardware implementations tend to both impose arbitrary implementation limits and add a risk of spurious rejection. This is all exacerbated by the fact that TM as expressed in systems like Haskell makes it very easy to construct enormous transactions which would require a minor miracle to actually apply in a system that isn't vastly over-specced for its load.
|
# ? Apr 2, 2016 07:01 |
|
rjmccall posted:As a former PhD student who once wrote an embarrassingly-gushing-in-hindsight essay about TM... kindof? It's very easy to write code that is obviously "correct" with TM, but that means a lot less than you might think. [...]
|
# ? Apr 2, 2016 09:20 |
|
Hanging under heavy load is not correct behavior either, though, and can be just as hard to reproduce.
|
# ? Apr 2, 2016 16:35 |
|
What is supposed to be the great benefit of transactional memory over threads/processes using message passing anyway? I really like message passing for concurrent systems - it makes it much easier for me to understand what is going on, and it appears simpler to implement too.
|
# ? Apr 3, 2016 05:52 |
|
Athas posted:What is supposed to be the great benefit of transactional memory over threads/processes using message passing anyway? I really like message passing for concurrent systems - it makes it much easier for me to understand what is going on, and it appears simpler to implement too. A lot of people hoped it would just be a flag passed to the compiler that made their lovely code work concurrently without having to change any of it.
|
# ? Apr 3, 2016 06:11 |
|
the talent deficit posted:A lot of people hoped it would just be a flag passed to the compiler that made their lovely code work concurrently without having to change any of it. My experience so far has been that trying to get parallelism, without talking about parallelism in the program itself, is totally futile. Although data-parallel programming can work in the small scale. Man, we ought to have a parallel programming thread. So many opinions.
|
# ? Apr 3, 2016 08:49 |
|
I thought STM was supposed to be a superior option to locks, not message passing?
|
# ? Apr 3, 2016 18:39 |
|
HappyHippo posted:I thought STM was supposed to be a superior option to locks, not message passing? Locks and message passing are two ways to accomplish the same thing.
|
# ? Apr 4, 2016 02:47 |
|
Still working my way through the Haskell book and am having a hard time understanding this example. It's an if/else example function:code:
*I tried to include the full code snippet but SA is flagging me with a cloudflare error... It doesn't like something in the code sample. Hughmoris fucked around with this message at 02:09 on Apr 11, 2016 |
# ? Apr 11, 2016 02:05 |
|
"Where" means that they're defining a term used in the definition of the function. It may be a little easier to understand with 'let' syntax and some parens:code:
|
# ? Apr 11, 2016 02:16 |
|
Asymmetrikon posted:"Where" means that they're defining a term used in the definition of the function. It may be a little easier to understand with 'let' syntax and some parens: That makes sense, thanks.
|
# ? Apr 11, 2016 02:22 |
|
I just rewrote a bunch of Mathematica code into Haskell and the compiled Haskell runs significantly slower than the interpreted Mathematica code. I wonder if I'm doing recursion inefficiently or something but my code isn't that complicated so I can't imagine why it would be so slow. https://wiki.haskell.org/Haskell_programming_tips#Avoid_explicit_recursion seems to say I should figure out how to write all my recursive functions as maps and folds and such. Can that really make a significant difference to the runtime? Is there anything else I should be looking out for?
|
# ? Apr 11, 2016 02:38 |
Lists in Haskell are singly-linked lists, which can be pretty inefficient if you have to traverse them more than once. If you have a lot of lists (or Strings, which are really just lists of chars), you might want to try replacing them with a more efficient data structure (e.g. Vectors or ByteStrings). Other than that, it's hard to diagnose efficiency problems without seeing any code.
|
|
# ? Apr 11, 2016 02:55 |
|
I was interested, so I did the following:code:
code:
The real reason to use things like map and fold is for simplicity (most recursive functions follow similar forms which can be abstracted out), but you might also get efficiency gains?
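Since the actual snippets didn't survive, here's a sketch of the kind of thing being compared, summing a big list three ways (illustrative names, no timings reproduced):

```haskell
import Data.List (foldl')

-- Explicit recursion: builds a chain of (+) thunks before anything is forced.
sumRec :: [Int] -> Int
sumRec []     = 0
sumRec (x:xs) = x + sumRec xs

-- Lazy left fold: also accumulates thunks; can blow the stack on big lists.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Strict left fold: forces the accumulator at each step, constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```

All three agree on the result; the differences only show up in memory use and speed, and (per the post below) only really once you compile with optimisation rather than run in ghci.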
|
# ? Apr 11, 2016 03:15 |
|
Foldl' sum will be faster than any of those, and using arrays more so too. E: if that's ghci then unless you've changed it, it won't have been optimised either
|
# ? Apr 11, 2016 07:15 |
|
Thanks for your input everyone. I did some reading and I'm pretty sure I have a guess at what the key thing slowing my program down is. This is a bit simplified from my real code, but it's close. I'm interested in generating a certain Int "f xs" from many small lists of Ints xs, like "f [0,0,0]" up to "f [20,20,20]". This calculation depends on the f values of lists with lower values a lot, and I worry a huge thunk is being generated (am I saying that right?) instead of just using the f-values we calculated previously like my memoizing Mathematica code does. The key code is roughly something like this (actual code more complicated): code:
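The real code won't post, so here's a hypothetical stand-in with the same shape (not my actual code): 'f xs' recurses on lists with smaller entries, and with no sharing the same subproblems get recomputed over and over, the "slowfib" failure mode:

```haskell
-- Hypothetical stand-in: f of a list depends on f of lists with one entry
-- decremented. Naive recursion like this recomputes each subproblem
-- exponentially many times; memoization would compute each one once.
f :: [Int] -> Int
f xs
  | all (== 0) xs = 1
  | otherwise     = sum [ f (dec i xs) | i <- [0 .. length xs - 1]
                                       , xs !! i > 0 ]
  where
    -- decrement the i-th entry of the list
    dec i ys = take i ys ++ [ys !! i - 1] ++ drop (i + 1) ys
```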
dirby fucked around with this message at 12:57 on Apr 11, 2016 |
# ? Apr 11, 2016 12:54 |
|
After reading a lot about strict vs. lazy and seq and bang patterns and a bunch of other stuff, I just mimicked the explicit memoization outlined in https://wiki.haskell.org/Memoization#Memoization_with_recursion and made no other changes, and finally got speeds very comparable (maybe half the speed?) to what I was getting with my memoized Mathematica code. That's much more workable than the seemingly impossible "slowfib"-style speed I had been getting before.
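For anyone following along, the pattern on that wiki page looks roughly like this: recursive calls are routed through a top-level name that indexes into a lazily built list, so each element is computed at most once:

```haskell
-- Memoization via a lazily evaluated list (the classic memoized fib).
-- 'memoFib' is point-free, so 'map fib [0 ..]' is a shared CAF: every call
-- indexes into the same list, and each cell is evaluated at most once.
memoFib :: Int -> Integer
memoFib = (map fib [0 ..] !!)
  where
    fib 0 = 0
    fib 1 = 1
    fib n = memoFib (n - 2) + memoFib (n - 1)
```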
|
# ? Apr 13, 2016 02:49 |
|
As is the eventual fate of every functional programmer, I have been developing my own purely functional language. It's still pretty raw and quite simple (I prefer "austere"), supporting neither polymorphism nor higher-order functions (although built-in control structures fake the latter). Its main claim to fame is that the compiler can generate parallel GPU code that runs pretty fast. It can also generate Python+GPU code, which means you can write things like an interactive Mandelbrot explorer that renders the fractal in real time while you zoom and scroll about.
|
# ? Apr 16, 2016 12:54 |
|
If you were compiling the code, you'd see a much more dramatic decrease in the memory used with foldl', because of something called list fusion. Basically, in GHC, a lot of functions in the Prelude, and a good number of functions in other modules, use rewrite rules to pretend a list is actually a function that produces elements of the list on demand. The key here is GHC.Base.build, which uses the RankNTypes extension (which allows function arguments to themselves be polymorphic): code:
code:
As an example of what this lets you do, let's take a simple function from the Prelude, map: code:
code:
code:
What all this means is that, if you were to use foldl' to do the summation, then the compiler would use rules, inlining, strictness analysis, arity analysis, and others (they're tedious to perform by hand, but not complicated or obscure) to turn: code:
code:
So the lesson is that if you're consuming a list, use either foldr (if you're producing a lazy structure or consuming a possibly-infinite list) or foldl' (if you're producing something strict like a number and you know the list is finite). Side note: All of this only happens if you consume the list at the same time you produce it. If you have a list with a billion elements, calculate its length, and then later calculate its sum, then you're going to be hanging on to a list with a billion elements in memory. Sucks to be you.
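Since the code tags got stripped, here's a simplified sketch of the definitions being described, modelled on GHC.Base (the real versions carry RULES pragmas and different internal names):

```haskell
{-# LANGUAGE RankNTypes #-}

-- 'build' abstracts a list over its constructors: g is handed (:) and [].
-- Simplified from GHC.Base.
build :: forall a. (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The foldr/build rule, written here as a comment (in GHC it's a RULES pragma):
--   foldr k z (build g) = g k z
-- i.e. a list that is built and immediately folded never materialises.

-- 'map' in the fusible style: produce with build, consume with foldr.
-- (map' is a local illustrative name, not GHC's actual definition.)
map' :: (a -> b) -> [a] -> [b]
map' f xs = build (\c n -> foldr (\x ys -> c (f x) ys) n xs)
```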
|
# ? Apr 17, 2016 07:21 |
|
Bought the book Learn You a Haskell for Great Good. I like both the language and the book and have just finished Applicative Functors leaving only Monoids, Monads and Zippers. The book has a few (very few) errors (thank god not in the code though), and I think a handful of things would have benefited from having how they work behind the curtains explained in more detail. Although, I suppose that only applies if you are as curious as me about how things really work and need to know. I'm not blazing through the book; partly because I don't have much free time with the necessary energy after work and partly because some concepts and how they work take a while to settle. I get applicative functors now, but it sure took some effort. With that in mind, should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually, having understood applicative functors?
|
# ? Apr 19, 2016 14:53 |
|
Jarl posted:Bought the book Learn You a Haskell for Great Good. I like both the language and the book and have just finished Applicative Functors leaving only Monoids, Monads and Zippers. What errors? Jarl posted:I get applicative functors now, but it sure took some effort. With that in mind should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually having understood applicative functors? Monoids are very easy -- an associative operator with an identity element. For example, adding integers, multiplying integers, matrix multiplication, string concatenation. Monads are a generalization of the only sane API for constructing I/O actions in a "pure" functional manner -- and another associative law that makes sense.
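To make those examples concrete in Haskell (illustrative bindings, nothing from the book):

```haskell
import Data.Monoid (Product (..), Sum (..))

-- A monoid is an associative (<>) plus an identity element 'mempty'.
-- Strings under concatenation, with "" as the identity:
greeting :: String
greeting = "foo" <> mempty <> "bar"   -- "foobar"

-- Integers form a monoid two different ways (addition and multiplication),
-- so Haskell distinguishes them with newtype wrappers:
total :: Sum Int
total = Sum 2 <> Sum 3                -- Sum 5

scaled :: Product Int
scaled = Product 2 <> Product 3       -- Product 6
```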
|
# ? Apr 19, 2016 15:33 |
|
Jarl posted:I get applicative functors now, but it sure took some effort. With that in mind should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually having understood applicative functors? Monoids are easy and, despite the name, they have nothing to do with monads in general (some specific monads have a monoid constraint). Monads aren't significantly harder to understand than Functors or Applicatives, but it might take you a while to grasp the "why" of using them. It took me quite a while at least.
|
# ? Apr 19, 2016 15:37 |
|
Seriously, IO and effects and stuff has nothing to do with the definition of a monad. Set that aside when you first learn about them and try playing with a conceptually simple monad instance like Maybe.
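For instance, a couple of hypothetical helpers over Maybe, no IO in sight, just short-circuiting on Nothing:

```haskell
-- Division that fails cleanly instead of crashing on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do-notation over Maybe: each bind stops at the first Nothing,
-- so any failure along the way aborts the whole computation.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  safeDiv x c
```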
|
# ? Apr 19, 2016 17:50 |
|
sarehu posted:Monads are a generalization of the only sane API for constructing I/O actions in a "pure" functional manner -- and another associative law that makes sense. Don't explain monads like this to someone who is skeptical about being able to understand them, thanks.
|
# ? Apr 20, 2016 02:56 |
|
Use different types of monads. For normal boring enterprise application development in Scala, I've seen folks just use Option and Future in for comprehensions and never really think about what makes a monad or what you're doing under the hood, or that these are monadic operations, except that one value depends on another. But if you demonstrate all the other cliche stuff you would use a monad for, like a functional random number generator, or state (generalization of the former) or logging via writer, it's easier to see a pattern and why you would use these things. Then start thinking about combining effects. I want a machine that gives me logging, the ability to depend on asynchronous values, and an error channel with fail fast semantics. You can build that by stacking monad transformers, which I think of just as 'compressing' monads into another monad. Doing that will give you neat insight into how monads are used in practice.
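That post is about Scala, but the same "machine" can be sketched in Haskell with the transformers package: logging via Writer plus a fail-fast error channel via ExceptT (the async layer is omitted for brevity; the names here are illustrative):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.Writer (Writer, runWriter, tell)

-- One stack giving both effects: ExceptT layers fail-fast errors on top of
-- Writer's log accumulation.
type App a = ExceptT String (Writer [String]) a

-- Log the input, then either fail or double it.
step :: Int -> App Int
step n = do
  lift (tell ["got " ++ show n])
  if n < 0 then throwE "negative input" else pure (n * 2)

-- Peel the stack back off: an Either for the error channel, plus the log.
runApp :: App a -> (Either String a, [String])
runApp = runWriter . runExceptT
```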
|
# ? Apr 20, 2016 18:55 |
|
I want to try selling functional reactive programming to some people who are vaguely aware of functional programming in general, but not too clear on the specifics. I could give them chapter 1 of Blackheath & Jones, but is there any other good introduction that's worth sending along?
|
# ? Apr 21, 2016 00:27 |
|
Elm does a pretty good job of dumbing it down for front end developers, so maybe the code examples and guides there provide some good practical examples of frp
|
# ? Apr 21, 2016 00:36 |
|
Maluco Marinero posted:Elm does a pretty good job of dumbing it down for front end developers, so maybe the code examples and guides there provide some good practical examples of frp Seconded. Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.
|
# ? Apr 21, 2016 03:29 |
|
I'm interested in learning functional programming and I'm thinking of building a web thing as a way to go about it. Is Elixir worth looking into? What are other alternatives?
|
# ? Apr 27, 2016 06:11 |
|
tekz posted:I'm interested in learning functional programming and I'm thinking of building a web thing as a way to go about it. Is Elixir worth looking into? What are other alternatives? => QuantumNinja posted:Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.
|
# ? Apr 27, 2016 09:02 |
|
Well, Elm and Elixir serve different purposes. Elixir is for the server and Elm is for the client.
|
# ? Apr 27, 2016 09:49 |
|
I've seen a lot of people online doing an Elixir backend and an Elm frontend, and it seems like it works pretty well. Plus, you learn about two wholly different styles of functional programming (impure, dynamically typed, concurrent vs. pure, statically typed, non-concurrent).
|
# ? Apr 27, 2016 14:34 |
|
Athas posted:Well, it does, but as you said yourself: Erlang lends itself to a programming model where you spawn thousands (or millions) of processes. On a modern multicore processor, you need maybe 32 processes to exploit the hardware, and more just means overhead. Even with Erlang's lightweight threads you end up "simulating" all this parallelism that you don't really need. And on a cluster, performance depends on efficient communication patterns and locality, which Erlang doesn't really expose in a useful way (any process may send a message to any other process). Message passing is presently the only efficient way to program clusters, though. Erlang processes are not just about resource utilisation and they don't map to physical threads. A large part of it is simplifying the programming model, doing away with global state and, of course, fault tolerance. Asymmetrikon posted:I've seen a lot of people online doing an Elixir backend and an Elm frontend, and it seems like it works pretty well. Plus, you learn about two wholly different styles of functional programming (impure, dynamically typed, concurrent vs. pure, statically typed, non-concurrent). I second this. Elixir specifically is a very approachable language (Elm maybe a little less so), but the combination has been a very effective tool for lifting programmers out of the dark pit filled with Javascript and Ruby.
|
# ? Apr 27, 2016 17:07 |
|
tazjin posted:I second this. Elixir specifically is a very approachable language (Elm maybe a little less so), but the combination has been a very effective tool for lifting programmers out of the dark pit filled with Javascript and Ruby. Elm is by far the most approachable purely functional language, and arguably one of the more approachable languages in general, because the compiler errors are really, really good. That sounds silly, but so much of learning new programming concepts is changing things and seeing what works; having the compiler explain very clearly why something won't work, and suggest how to fix it, is super helpful for learning. Arcsech fucked around with this message at 21:49 on Apr 27, 2016 |
# ? Apr 27, 2016 21:46 |
|
Arcsech posted:Elm is by far the most approachable purely functional language, and arguably one of the more approachable languages in general because the compiler errors are really, really good, which sounds silly, but given how much of learning new programming concepts is changing things and seeing what works, having the compiler explain very clearly why some things won't work and give suggestions on how to fix it is super helpful for learning. The compiler errors are indeed fantastic and I hope the GHC crew takes some inspiration from that, but specifically when it comes to web development Elm introduces several radically different concepts at the same time and it's often too much for people. Keep in mind that frontend developers often have a background in untyped, imperative languages with no formal design - quite literally the opposite of Elm. For those coming from a functional language the situation looks a lot different.
|
# ? Apr 28, 2016 00:26 |
I think I had heard of Elm a while back and it looks like a nice alternative to writing JavaScript. I've got some reservations about it, mostly related to what looks like mixing of styling and code in their examples. Has anyone styled an Elm app with CSS on top of their HTML generation?
|
|
# ? Apr 28, 2016 22:12 |
|
It would work just fine, there's absolutely no reason you NEED to put CSS in the markup, they're just demonstrating from a do-it-all-in-Elm standpoint. Just use CSS classes instead, write BEM-style components.
|
# ? Apr 28, 2016 22:14 |
|
|
|
How does Elm compare to, say, using React+Redux, which I really like?
|
# ? Apr 30, 2016 16:27 |