Athas
Aug 6, 2007

fuck that joker

Doc Hawkins posted:

I'm not that ignorant; I just thought share-nothing concurrency would lend itself to parallelism too. But thank you for the answer.

Well, it does, but as you said yourself: Erlang lends itself to a programming model where you spawn thousands (or millions) of processes. On a modern multicore processor, you need maybe 32 processes to exploit the hardware; more just means overhead. Even with Erlang's lightweight threads, you end up "simulating" all this parallelism that you don't really need. And on a cluster, performance depends on efficient communication patterns and locality, which Erlang doesn't really expose in a useful way (any process may send a message to any other process). Message passing is presently the only efficient way to program clusters, though.

Efficient parallelism is all about limiting communication and ensuring that the granularity of parallelisation isn't smaller than it has to be.
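(To make that concrete in Haskell terms, here's a minimal sketch using Control.Parallel.Strategies, where parListChunk is the granularity knob. The chunk size of 1000 is an arbitrary assumption, not a recommendation; compile with -threaded and run with +RTS -N to actually get the parallelism.)

code:
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- A deliberately expensive function, so each element is worth a spark's overhead.
expensive :: Int -> Int
expensive n = sum [1 .. n `mod` 10000]

main :: IO ()
main = print (sum results)
  where
    -- 1000 elements per spark is the granularity choice: too small and
    -- scheduling overhead dominates, exactly as described above.
    results = map expensive [1 .. 100000] `using` parListChunk 1000 rdeepseq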

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Thanks for the explanations/thoughts on functional programming and parallelism/concurrency. Still working my way through beginner tutorials in Haskell; I think I'm going to drop the $60 on the Haskell Programming From First Principles book.

Hughmoris fucked around with this message at 00:09 on Apr 2, 2016

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Ralith posted:

Sure, but for the typical case of applications where it's not necessary to squeeze every last bit of juice out of your CPU, this represents a legitimate case where it's much easier to write correct parallel code.

As a former PhD student who once wrote an embarrassingly-gushing-in-hindsight essay about TM... kindof? It's very easy to write code that is obviously "correct" with TM, but that means a lot less than you might think. TM — hardware or software — tends to degrade terribly in the face of any actual contention for the memory being accessed, because of course the transaction cannot be applied if any of the memory has been touched. But even absent contention, verifying the validity of the transaction is very expensive in software implementations, and hardware implementations tend to both impose arbitrary implementation limits and add a risk of spurious rejection. This is all exacerbated by the fact that TM as expressed in systems like Haskell makes it very easy to construct enormous transactions which would require a minor miracle to actually apply in a system that isn't vastly over-specced for its load.
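(For concreteness, the kind of code under discussion: the standard Haskell STM bank-transfer sketch. It's obviously "correct", which is the seductive part; under contention, transactions touching the same TVars just keep invalidating and retrying each other.)

code:
import Control.Concurrent.STM

-- The classic STM example: an atomic transfer between two accounts.
-- Easy to see that it's correct; also easy to see nothing about its cost.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)         -- retries until funds are available
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  print =<< atomically (readTVar b)  -- 40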

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

rjmccall posted:

As a former PhD student who once wrote an embarrassingly-gushing-in-hindsight essay about TM... kindof? It's very easy to write code that is obviously "correct" with TM, but that means a lot less than you might think. TM — hardware or software — tends to degrade terribly in the face of any actual contention for the memory being accessed, because of course the transaction cannot be applied if any of the memory has been touched. But even absent contention, verifying the validity of the transaction is very expensive in software implementations, and hardware implementations tend to both impose arbitrary implementation limits and add a risk of spurious rejection. This is all exacerbated by the fact that TM as expressed in systems like Haskell makes it very easy to construct enormous transactions which would require a minor miracle to actually apply in a system that isn't vastly over-specced for its load.
I'll take "easy to write something that performs spectacularly badly" over "easy to write something that silently does the wrong thing" for most projects. In my experience it's much easier to improve the performance of well-behaved but slow code than to fix broken but fast code.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Hanging under heavy load is not correct behavior either, though, and can be just as hard to reproduce.

Athas
Aug 6, 2007

fuck that joker
What is supposed to be the great benefit of transactional memory over threads/processes using message passing anyway? I really like message passing for concurrent systems - it makes it much easier for me to understand what is going on, and it appears simpler to implement too.
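(A minimal sketch of the style I mean, in Haskell with Control.Concurrent.Chan: one thread owns the state, everyone else sends messages, and reads travel back on a reply channel. The names are made up for illustration.)

code:
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (replicateM_)

-- The reply channel rides along inside the message, Erlang-style.
data Msg = Add Int | Get (Chan Int)

-- A single thread owns the counter; nothing else can touch it.
counter :: Chan Msg -> Int -> IO ()
counter inbox n = do
  msg <- readChan inbox
  case msg of
    Add k     -> counter inbox (n + k)
    Get reply -> writeChan reply n >> counter inbox n

main :: IO ()
main = do
  inbox <- newChan
  _ <- forkIO (counter inbox 0)
  replicateM_ 5 (writeChan inbox (Add 2))
  reply <- newChan
  writeChan inbox (Get reply)
  print =<< readChan reply  -- 10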

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Athas posted:

What is supposed to be the great benefit of transactional memory over threads/processes using message passing anyway? I really like message passing for concurrent systems - it makes it much easier for me to understand what is going on, and it appears simpler to implement too.

A lot of people hoped it would just be a flag passed to the compiler that made their lovely code work concurrently without having to change any of it.

Athas
Aug 6, 2007

fuck that joker

the talent deficit posted:

A lot of people hoped it would just be a flag passed to the compiler that made their lovely code work concurrently without having to change any of it.

My experience so far has been that trying to get parallelism without talking about parallelism in the program itself is totally futile, although data-parallel programming can work at a small scale. Man, we ought to have a parallel programming thread. So many opinions.

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
I thought STM was supposed to be a superior alternative to locks, not to message passing?

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





HappyHippo posted:

I thought STM was supposed to be a superior alternative to locks, not to message passing?

Locks and message passing are two ways to accomplish the same thing.
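(You can see this in Haskell, where a single MVar is both things at once depending on how you hold it. A rough sketch:)

code:
import Control.Concurrent.MVar

-- Held one way, an MVar is a lock guarding shared state:
incrementLocked :: MVar Int -> IO ()
incrementLocked var = modifyMVar_ var (pure . (+ 1))

-- Held another way, an empty MVar is a one-slot mailbox:
send :: MVar a -> a -> IO ()
send = putMVar      -- blocks while the slot is already full

receive :: MVar a -> IO a
receive = takeMVar  -- blocks until a message arrives

main :: IO ()
main = do
  counter <- newMVar 0
  incrementLocked counter
  print =<< readMVar counter  -- 1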

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Still working my way through the Haskell book and am having a hard time understanding this example. It's an if/else example function:

code:
greetIfCool :: String -> IO ()
greetIfCool coolness =
  if cool
    then ...
    else ...
  where cool = coolness == "downright frosty yo"
I'm getting tripped up on the where statement. What is that statement telling me?

*I tried to include the full code snippet but SA is flagging me with a Cloudflare error... It doesn't like something in the code sample.

Hughmoris fucked around with this message at 02:09 on Apr 11, 2016

Asymmetrikon
Oct 30, 2009

I believe you're a big dork!
"Where" means that they're defining a term used in the definition of the function. It may be a little easier to understand with 'let' syntax and some parens:

code:
greetIfCool :: String -> IO ()
greetIfCool coolness =
  let
    cool = (coolness == "downright frosty yo")
  in
    if cool
    then ...
    else ...
So we're saying that "cool"'s value is true if the argument coolness is equal to "downright frosty yo", then using "cool" as a Boolean value in the if statement.
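(And for completeness, a fully runnable version; the putStrLn branch bodies are stand-ins for whatever the book actually prints, since those were elided above:)

code:
greetIfCool :: String -> IO ()
greetIfCool coolness =
  if cool
    then putStrLn "eyyyyy. What's shakin'?"  -- stand-in branch body
    else putStrLn "pshhhh."                  -- stand-in branch body
  where cool = coolness == "downright frosty yo"

main :: IO ()
main = do
  greetIfCool "downright frosty yo"  -- takes the then branch
  greetIfCool "not cool at all"      -- takes the else branch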

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Asymmetrikon posted:

"Where" means that they're defining a term used in the definition of the function. It may be a little easier to understand with 'let' syntax and some parens:

code:
greetIfCool :: String -> IO ()
greetIfCool coolness =
  let
    cool = (coolness == "downright frosty yo")
  in
    if cool
    then ...
    else ...
So we're saying that "cool"'s value is true if the argument coolness is equal to "downright frosty yo", then using "cool" as a Boolean value in the if statement.

That makes sense, thanks.

dirby
Sep 21, 2004


Helping goons with math
I just rewrote a bunch of Mathematica code into Haskell, and the compiled Haskell runs significantly slower than the interpreted Mathematica code. I wonder if I'm doing recursion inefficiently or something, but my code isn't that complicated, so I can't imagine why it would be so slow. https://wiki.haskell.org/Haskell_programming_tips#Avoid_explicit_recursion seems to say I should figure out how to write all my recursive functions as maps and folds and such. Can that really make a significant difference to the runtime? Is there anything else I should be looking out for?

VikingofRock
Aug 24, 2008




Lists in Haskell are singly-linked lists, which can be pretty inefficient if you have to traverse them more than once. If you have a lot of lists (or Strings, which are really just lists of Chars), you might want to try replacing them with a more efficient data structure (e.g. Vectors or ByteStrings).
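(A quick sketch of the kind of swap I mean, assuming the vector package; Data.Vector.Unboxed keeps the Ints in a flat array instead of a linked chain of boxed cells:)

code:
import qualified Data.Vector.Unboxed as VU

-- The same sum over two representations: a cons cell per element
-- versus one flat unboxed array.
sumList :: Int
sumList = sum [1 .. 10000000 :: Int]

sumVector :: Int
sumVector = VU.sum (VU.enumFromTo 1 (10000000 :: Int))

main :: IO ()
main = print (sumList, sumVector)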

Other than that, it's hard to diagnose efficiency problems without seeing any code.

Asymmetrikon
Oct 30, 2009

I believe you're a big dork!
I was interested, so I did the following:

code:
recSum :: Num a => [a] -> a
recSum []    = 0
recSum (h:t) = h + recSum t

iterSum :: Num a => [a] -> a
iterSum = iterSum' 0
  where
    iterSum' a []    = a
    iterSum' a (h:t) = iterSum' (a + h) t

foldlSum :: Num a => [a] -> a
foldlSum = foldl (+) 0

foldrSum :: Num a => [a] -> a
foldrSum = foldr (+) 0

testSum :: ([Int] -> Int) -> Int
testSum f = f $ take 10000000 $ repeat 5

code:
*Main> :set +s
*Main> testSum recSum
50000000
(6.60 secs, 2,271,440,152 bytes)
*Main> testSum iterSum
50000000
(6.57 secs, 2,337,734,568 bytes)
*Main> testSum foldlSum
50000000
(3.32 secs, 1,453,817,584 bytes)
*Main> testSum foldrSum
50000000
(2.24 secs, 1,373,478,464 bytes)
So the builtins do seem to have a significant speed increase over the custom functions.

The real reason to use things like map and fold is for simplicity (most recursive functions follow similar forms which can be abstracted out), but you might also get efficiency gains?

gonadic io
Feb 16, 2011

>>=
A foldl' sum will be faster than any of those, and using arrays even more so.

E: if that's GHCi then, unless you've changed it, it won't have been optimised either
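(For reference, the suggested version looks like this; foldl' lives in Data.List, and the timings only mean much if you compile with ghc -O2 rather than benchmarking in GHCi:)

code:
import Data.List (foldl')

-- Strict left fold: the accumulator is forced at every step, so no
-- ten-million-deep chain of (+) thunks ever builds up.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum (replicate 10000000 5))  -- 50000000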

dirby
Sep 21, 2004


Helping goons with math
Thanks for your input everyone. I did some reading and I'm pretty sure I have a guess at what the key thing slowing my program down is. This is a bit simplified from my real code, but it's close. I'm interested in generating a certain Int "f xs" from many small lists of Ints xs, like "f [0,0,0]" up to "f [20,20,20]". This calculation depends heavily on the f values of lists with lower entries, and I worry a huge thunk is being generated (am I saying that right?) instead of just using the f-values we calculated previously, like my memoizing Mathematica code does.

The key code is roughly something like this (actual code more complicated):

code:
import Data.Bits (xor)
import Data.List (nub)

allpairs :: [Int] -> [([Int],[Int])]
allpairs xs = ... -- almost every nonnegative ([a,b,c],[d,e,f]) pair where [a+d,b+e,c+f] == xs

mexhelper :: [Int] -> Int -> Int
mexhelper xs int = if int `notElem` xs then int else mexhelper xs (int + 1)

minimalexcludant :: [Int] -> Int
minimalexcludant xs = mexhelper xs 0

pairf :: ([Int],[Int]) -> Int
pairf (a, b) = xor (f a) (f b)

f :: [Int] -> Int
f xs = minimalexcludant $ nub $ map pairf $ allpairs xs
I suspect the issue is that when pairf is called, it doesn't immediately use the fact that we should know f a and f b from previous calculations, and instead we end up calculating a big tree of lists that sum to lists that sum to...before anything gets evaluated. I think the key to fixing this is to rewrite the recursion that makes f work (with foldl'?), but I'm not certain how to go about it.

dirby fucked around with this message at 12:57 on Apr 11, 2016

dirby
Sep 21, 2004


Helping goons with math
After reading a lot about strict vs. lazy and seq and bang patterns and a bunch of other stuff, I just mimicked the explicit memoization outlined in https://wiki.haskell.org/Memoization#Memoization_with_recursion and, with no other changes, finally got speeds very comparable (maybe half the speed?) to what I was getting with my memoized Mathematica code. That's much more workable than the "slowfib"-style speed I had been getting before.
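(For anyone following along, the trick from that wiki page in its fib form: the lazily built list is the memo table, and (!!) forces each entry at most once.)

code:
-- memoized_fib is a CAF, so `map fib [0 ..]` is built once and shared
-- across every call; (!!) just reads back entries already forced.
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!)
  where
    fib 0 = 0
    fib 1 = 1
    fib n = memoized_fib (n - 2) + memoized_fib (n - 1)

main :: IO ()
main = print (memoized_fib 100)  -- instant; naive recursion would never finish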

Athas
Aug 6, 2007

fuck that joker
As is the eventual fate of every functional programmer, I have been developing my own purely functional language. It's still pretty raw and quite simple (I prefer "austere"), supporting neither polymorphism nor higher-order functions (although built-in control structures fake the latter). Its main claim to fame is that the compiler can generate parallel GPU code that runs pretty fast. It can also generate Python+GPU code, which means you can write things like an interactive Mandelbrot explorer that renders the fractal in real time while you zoom and scroll about.

Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!
If you were compiling the code, you'd see a much more dramatic decrease in the memory used with foldl', because of something called list fusion. Basically, in GHC, a lot of functions in the Prelude, and a good number of functions in other modules, use rewrite rules to pretend a list is actually a function that produces elements of the list on demand.

The key here is GHC.Base.build, which uses the RankNTypes extension (which allows polymorphism in function arguments):
code:
build :: forall a. (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []
If you're familiar with the type of foldr, you may notice a similarity between the two. It's intentional, and there is in fact a rewrite rule that says:
code:
forall k z (g :: forall b. (a -> b -> b) -> b -> b) .
  foldr k z (build g) = g k z
This enables it to directly consume the list as it would have been produced, without producing an intermediate list at all in the first place.

As an example of what this lets you do, let's take a simple function from the Prelude, map:

code:
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs
Looks pretty simple, huh? Well, the rule-rewriting engine will trigger a transformation into:

code:
{-# RULES "map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs #-}

mapFB :: (b -> r -> r) -> (a -> b) -> a -> r -> r
mapFB c f = \x ys -> c (f x) ys
This basically causes that map code to be turned into a build which uses a foldr on its argument, causing what would be an intermediate list to vanish. Note that this produces a list in the form of build g as well. If the incipient list is turned into an actual list, then another rule fires:
code:
{-# RULES "mapFB" [1] forall f. foldr (mapFB (:) f) [] = map f #-}
This is controlled by phase annotations: the [1] means "only use this rule in phases 1 and 0" and [~1] means "use this rule in every phase before 1" - by default the simplifier runs three numbered phases, in order 2, 1, and 0.

What all this means is that, if you were to use foldl' to do the summation, then the compiler would use rules, inlining, strictness analysis, arity analysis, and others (they're tedious to perform by hand, but not complicated or obscure) to turn:
code:
value :: Int
value = foldl' (+) 0 $ take 10000000 $ repeat 5
into
code:
value :: Int
value = let
    xs !m !z = case m of
        1 -> z + 5
        _ -> xs (m - 1) (z + 5)
    in xs 10000000 0
And a simple tail-recursive function with a strict index can easily be turned into a loop in assembly.

So the lesson is that if you're consuming a list, use either foldr (if you're producing a lazy structure or consuming a possibly-infinite list) or foldl' (if you're producing something strict like a number and you know the list is finite).

Side note: All of this happens only if you consume the list at the same time you produce it. If you have a list with a billion elements, calculate its length, and then later calculate its sum, then you're going to be hanging on to a list with a billion elements in memory. Sucks to be you.
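(If you do need two results from one list, the usual fix is a single fold with a combined accumulator. A sketch, with bang patterns keeping both fields strict:)

code:
{-# LANGUAGE BangPatterns #-}

-- One traversal, two results: the list streams through and can be
-- collected as it goes, instead of being pinned in memory between a
-- length pass and a later sum pass.
sumAndLength :: [Int] -> (Int, Int)
sumAndLength = go 0 0
  where
    go !s !n []       = (s, n)
    go !s !n (x : xs) = go (s + x) (n + 1) xs

main :: IO ()
main = print (sumAndLength [1 .. 1000000])  -- (500000500000, 1000000)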

Jarl
Nov 8, 2007

So what if I'm not for the ever offended?
Bought the book Learn You a Haskell for Great Good. I like both the language and the book and have just finished Applicative Functors leaving only Monoids, Monads and Zippers.

The book has a few (very few) errors (thank god not in the code though), and I think a handful of things would have benefited from a more detailed explanation of how they work behind the curtains. Although I suppose that only applies if you are as curious as me about how things really work and need to know.

I'm not blazing through the book, partly because I don't have much free time with the necessary energy after work and partly because some concepts, and how they work, take a while to settle.

I get applicative functors now, but it sure took some effort. With that in mind should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually having understood applicative functors?

sarehu
Apr 20, 2007

(call/cc call/cc)

Jarl posted:

Bought the book Learn You a Haskell for Great Good. I like both the language and the book and have just finished Applicative Functors leaving only Monoids, Monads and Zippers.

The book has a few (very few) errors (thank god not in the code though),

What errors?

Jarl posted:

I get applicative functors now, but it sure took some effort. With that in mind should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually having understood applicative functors?

Monoids are very easy -- an associative operator with an identity element. For example, adding integers, multiplying integers, matrix multiplication, string concatenation. Monads are a generalization of the only sane API for constructing I/O actions in a "pure" functional manner -- and another associative law that makes sense.
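(The whole interface is just <> plus an identity element, mempty; Sum and Product are wrappers that pick which of the two integer monoids you mean. A tiny demo:)

code:
import Data.Monoid (Product (..), Sum (..), (<>))

main :: IO ()
main = do
  print (getSum (Sum 3 <> Sum 4 <> mempty))    -- 7: addition, identity 0
  print (getProduct (Product 3 <> Product 4))  -- 12: multiplication, identity 1
  putStrLn ("con" <> "cat" <> mempty)          -- "concat": concatenation, identity ""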

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Jarl posted:

I get applicative functors now, but it sure took some effort. With that in mind should I prepare myself for a long haul with monoids and monads, or am I already almost there conceptually having understood applicative functors?

Monoids are easy and, despite the name, they have nothing to do with monads in general (some specific monads have a monoid constraint).

Monads aren't significantly harder to understand than Functors or Applicatives, but it might take you a while to grasp the "why" of using them. It took me quite a while at least.

brap
Aug 23, 2004

Grimey Drawer
Seriously, IO and effects and stuff have nothing to do with the definition of a monad. Set that aside when you first learn about them and try playing with a conceptually simple monad instance like Maybe.
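(Something like this, say; the only "effect" is that the first Nothing short-circuits everything after it. The family table is made up:)

code:
import qualified Data.Map as M

-- Chain two lookups that can each fail. The Maybe monad threads the
-- failure through: the first Nothing skips the rest of the do block.
grandparentOf :: String -> M.Map String String -> Maybe String
grandparentOf person parents = do
  parent      <- M.lookup person parents
  grandparent <- M.lookup parent parents
  pure grandparent

main :: IO ()
main = do
  let parents = M.fromList [("kid", "mum"), ("mum", "gran")]
  print (grandparentOf "kid" parents)     -- Just "gran"
  print (grandparentOf "orphan" parents)  -- Nothing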

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

sarehu posted:

Monads are a generalization of the only sane API for constructing I/O actions in a "pure" functional manner -- and another associative law that makes sense.

Don't explain monads like this to someone who is skeptical about being able to understand them, thanks.

sink
Sep 10, 2005

gerby gerb gerb in my mouf
Use different types of monads.

For normal boring enterprise application development in Scala, I've seen folks just use Option and Future in for comprehensions and never really think about what makes a monad or what you're doing under the hood, or that these are monadic operations, except that one value depends on another.

But if you demonstrate all the other cliche stuff you would use a monad for, like a functional random number generator, or state (generalization of the former) or logging via writer, it's easier to see a pattern and why you would use these things.

Then start thinking about combining effects. I want a machine that gives me logging, the ability to depend on asynchronous values, and an error channel with fail fast semantics. You can build that by stacking monad transformers, which I think of just as 'compressing' monads into another monad. Doing that will give you neat insight into how monads are used in practice.
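(In Haskell clothing, that particular machine, with plain IO standing in for the async layer, comes out as a transformer stack along these lines. A rough sketch with invented names, using the transformers package:)

code:
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.Writer (WriterT, runWriterT, tell)

-- Fail-fast errors (ExceptT) stacked on logging (WriterT) on IO.
type App a = ExceptT String (WriterT [String] IO) a

step :: Int -> App Int
step n = do
  lift (tell ["processing " ++ show n])  -- the logging channel
  if n < 0
    then throwE "negative input"         -- the fail-fast error channel
    else pure (n * 2)

main :: IO ()
main = do
  (result, logs) <- runWriterT (runExceptT (mapM step [1, 2, -3, 4]))
  mapM_ putStrLn logs  -- logs up to the failure are preserved
  print result         -- Left "negative input"; 4 was never processed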

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


I want to try selling functional reactive programming to some people who are vaguely aware of functional programming in general, but not too clear on the specifics. I could give them chapter 1 of Blackheath & Jones, but is there any other good introduction that's worth sending along?

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
Elm does a pretty good job of dumbing it down for front-end developers, so maybe the code examples and guides there provide some good practical examples of FRP.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Maluco Marinero posted:

Elm does a pretty good job of dumbing it down for front-end developers, so maybe the code examples and guides there provide some good practical examples of FRP.

Seconded. Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.

mila kunis
Jun 10, 2011
I'm interested in learning functional programming and I'm thinking of building a web thing as a way to go about it. Is Elixir worth looking into? What are other alternatives?

ynohtna
Feb 16, 2007

backwoods compatible
Illegal Hen

tekz posted:

I'm interested in learning functional programming and I'm thinking of building a web thing as a way to go about it. Is Elixir worth looking into? What are other alternatives?

=>

QuantumNinja posted:

Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

Well, Elm and Elixir serve different purposes. Elixir is for the server and Elm is for the client.

Asymmetrikon
Oct 30, 2009

I believe you're a big dork!
I've seen a lot of people online doing an Elixir backend and an Elm frontend, and it seems like it works pretty well. Plus, you learn about two wholly different styles of functional programming (impure, dynamically typed, concurrent vs. pure, statically typed, non-concurrent).

tazjin
Jul 24, 2015


Athas posted:

Well, it does, but as you said yourself: Erlang lends itself to a programming model where you spawn thousands (or millions) of processes. On a modern multicore processor, you need maybe 32 processes to exploit the hardware; more just means overhead. Even with Erlang's lightweight threads, you end up "simulating" all this parallelism that you don't really need. And on a cluster, performance depends on efficient communication patterns and locality, which Erlang doesn't really expose in a useful way (any process may send a message to any other process). Message passing is presently the only efficient way to program clusters, though.

Efficient parallelism is all about limiting communication and ensuring that the granularity of parallelisation isn't smaller than it has to be.

Erlang processes are not just about resource utilisation, and they don't map to physical threads. A large part of it is simplifying the programming model, doing away with global state, and, of course, fault tolerance.

Asymmetrikon posted:

I've seen a lot of people online doing an Elixir backend and an Elm frontend, and it seems like it works pretty well. Plus, you learn about two wholly different styles of functional programming (impure, dynamically typed, concurrent vs. pure, statically typed, non-concurrent).

I second this. Elixir specifically is a very approachable language (Elm maybe a little less so), but the combination has been a very effective tool for lifting programmers out of the dark pit filled with Javascript and Ruby.

Arcsech
Aug 5, 2008

tazjin posted:

I second this. Elixir specifically is a very approachable language (Elm maybe a little less so), but the combination has been a very effective tool for lifting programmers out of the dark pit filled with Javascript and Ruby.

Elm is by far the most approachable purely functional language, and arguably one of the more approachable languages in general, because the compiler errors are really, really good. That sounds silly, but so much of learning new programming concepts is changing things and seeing what works; having the compiler explain very clearly why something won't work, and suggest how to fix it, is super helpful for learning.

Arcsech fucked around with this message at 21:49 on Apr 27, 2016

tazjin
Jul 24, 2015


Arcsech posted:

Elm is by far the most approachable purely functional language, and arguably one of the more approachable languages in general, because the compiler errors are really, really good. That sounds silly, but so much of learning new programming concepts is changing things and seeing what works; having the compiler explain very clearly why something won't work, and suggest how to fix it, is super helpful for learning.

The compiler errors are indeed fantastic, and I hope the GHC crew takes some inspiration from them, but specifically when it comes to web development, Elm introduces several radically different concepts at once, and it's often too much for people.
Keep in mind that frontend developers often have a background in untyped, imperative languages with no formal design - quite literally the opposite of Elm.

For those coming from a functional language, the situation looks a lot different.

Jo
Jan 24, 2005

:allears:
Soiled Meat
I think I had heard of Elm a while back and it looks like a nice alternative to writing JavaScript. I've got some reservations about it, mostly related to what looks like mixing of styling and code in their examples. Has anyone styled an Elm app with CSS on top of their HTML generation?

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
It would work just fine; there's absolutely no reason you NEED to put CSS in the markup, they're just demonstrating from a do-it-all-in-Elm standpoint. Just use CSS classes instead and write BEM-style components.

mila kunis
Jun 10, 2011
How does Elm compare to, say, React+Redux, which I really like?
