Athas
Aug 6, 2007

fuck that joker
I have never used Racket, but I have written 10000+ SLOC programs in Common Lisp and 10000+ SLOC programs in Haskell. While Common Lisp is more powerful, I have come to prefer Haskell. If you sit down and think about your data structures in advance, the difference in power is not significant, and Haskell does give you more peace of mind.

You will, at times, curse some of the shortcomings of Haskell, just like you will curse the shortcomings of any language. But if you know Lisp, you will know that they could be fixed with proper macros. Yes, there is Template Haskell, but that is a horrible place, and you must never go there.

Athas
Aug 6, 2007

fuck that joker

Smoke_Max posted:

Hi, Haskell newbie here. I was wondering why main doesn't have a [String] -> IO () type signature, as in other programming languages like C/C++ or Java. It would be pretty cool, thanks to pattern matching, to be able to do things like this:

code:
main [arg1, arg2] = ...

main _ = do
	name <- getProgName
	putStrLn ("usage: " ++ name ++ " arg1 arg2")

Yet another reason is that there is no guarantee that the contents of argv can be coerced meaningfully to a 'String'. It is just a byte-string, after all. True, 'getArgs' will simply assume that the byte-string encodes text in the native encoding, but at least you have a chance to do something different, which would not be the case if the damage had been done before 'main' was even invoked. Also, the 'String' type is horribly and hilariously inefficient and wasteful, so you may not want to use it.

As for option parsing, I am old-fashioned, so I just use the default GetOpt module. I never saw the point in the more fancy libraries; it's just option parsing.
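
For what it's worth, the pattern-matching style from the quoted post is easy to recover on top of getArgs; here is a minimal sketch (the helper name 'run' is just for illustration):

code:
import System.Environment (getArgs, getProgName)

-- A sketch of recovering the desired pattern-matching style inside main,
-- using getArgs and getProgName from System.Environment.
main :: IO ()
main = getArgs >>= run
  where
    run [arg1, arg2] = putStrLn ("got " ++ arg1 ++ " and " ++ arg2)
    run _            = do
      name <- getProgName
      putStrLn ("usage: " ++ name ++ " arg1 arg2")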

Athas
Aug 6, 2007

fuck that joker

Pollyanna posted:

What is functional programming poor at or incapable of, really? You can't do classes and procedural stuff, but that's kind of by definition.

The majority of algorithms are developed for languages that permit mutation, and porting them can be awkward or lose important performance characteristics. Dynamic programming is a class of algorithms that is often awkward to express (unless you can phrase the table as a lazy array or similar).
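
To make the lazy-array trick concrete, here is a minimal sketch (the choice of longest common subsequence and the name lcsLength are just for the example):

code:
import Data.Array

-- A sketch of dynamic programming via a lazily built array: each cell
-- refers to earlier cells, and laziness sorts out the evaluation order
-- that you would otherwise get by filling in the table in a careful order.
lcsLength :: String -> String -> Int
lcsLength xs ys = table ! (n, m)
  where
    n  = length xs
    m  = length ys
    xa = listArray (1, n) xs
    ya = listArray (1, m) ys
    table = array ((0, 0), (n, m))
      [ ((i, j), cell i j) | i <- [0 .. n], j <- [0 .. m] ]
    cell 0 _ = 0
    cell _ 0 = 0
    cell i j
      | xa ! i == ya ! j = table ! (i - 1, j - 1) + 1
      | otherwise        = max (table ! (i - 1, j)) (table ! (i, j - 1))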

As far as I know, it is still an open question whether pure functional programming is theoretically less performant than imperative programming, but it is easy to see that far less work has gone into purely functional algorithms.


Athas
Aug 6, 2007

fuck that joker

Pollyanna posted:

What sort of techniques are good for handling data structures in FP? I've already noticed that things like recursion and tail-call optimization need you to pass in something like an accumulator, your previous value, and your current value, but it's never quite sunk in.

I don't think there is any answer to this question that lies between "use recursion" and "study algebraic recursion schemes". I suggest reading Purely Functional Data Structures, which is an awesome demonstration of how to engineer simple high-performance data structures in both lazy and strict languages. You can skip the theory if it's not your cup of tea.
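
To make the accumulator remark from the quote concrete, a minimal sketch (sumNaive and sumAcc are just illustrative names):

code:
-- A sketch of accumulator-passing recursion: the naive version builds up a
-- chain of pending additions, while the accumulating version is a tail call
-- that carries the running total along.
sumNaive :: [Int] -> Int
sumNaive []       = 0
sumNaive (x : xs) = x + sumNaive xs

sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs

-- (In Haskell you would also want strictness in the accumulator, e.g.
-- foldl' from Data.List, since laziness otherwise builds up thunks.)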

Unfortunately, in a cruel twist of fate, the data structures you usually want to use in a purely functional language (recursive structures) tend to be terrible for parallelisation, despite the fact that the code itself has no side effects, and thus should not be too hard to run in parallel. I have yet to see a really convincing parallel functional language - NESL is probably the best I have seen, but it's old.


Athas
Aug 6, 2007

fuck that joker

sarehu posted:

I thought Okasaki's book was an exceptionally nice book to read from cover to cover.

It is true that the proofs can be hairy (especially when done in two different ways), and the chapter on data-structural bootstrapping can be hard to understand, but these can be skipped. It's still worthwhile to read the implementations of the data structures and the high-level arguments for why they are efficient.

Athas
Aug 6, 2007

fuck that joker

Hughmoris posted:

I dabble with programming as a hobby, mainly in Perl and Python. Trying my hand at functional programming with Haskell. I've read in multiple places that functional programming makes concurrency/parallel programming much easier, and that functional programming scales really well.

Could someone explain at a simpleton level why these things are easier with functional programming compared to OO/imperative languages?

It is mostly bullshit, and the envisioned trivial parallelism that functional programming supposedly grants has not actually manifested itself.

In principle, functional programs (which need not be written in a functional language) are easier to parallelise, because the absence of mutable state implies the absence of shared mutable state, which is the bane of parallel programming. In a functional language such as Haskell, which has fairly mature parallel programming techniques and libraries, you can spawn concurrent threads that communicate only through message passing. Of course, you can do this in any modern language, although I suppose that in Haskell, the temptation to create just a little bit of shared mutable state ("for performance/convenience, you see") is easier to resist.
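
As a minimal sketch of what I mean by message passing in Haskell (using forkIO and Chan from base; the worker computation here is just a stand-in):

code:
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM_)

-- A sketch of threads that communicate only through a channel: workers
-- send their results over a Chan instead of touching shared mutable state.
main :: IO ()
main = do
  results <- newChan
  forM_ [1 .. 4 :: Int] $ \i ->
    forkIO (writeChan results (i * i))  -- stand-in for real work
  replicateM_ 4 (readChan results >>= print)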

The myth of easy parallelism is a little less of a myth when it comes to implicit parallelism (or better: data parallelism), where you do not use explicit threads. A previous poster described 'parmap', which could be a 'map' operation that evaluates each element of the list in parallel. This is true and good, and if the function you are mapping with is a pure function (which it certainly is in Haskell), you are guaranteed that the entire operation is free of race conditions and similar problems. So: parallelism is indeed easy. But: the only reason you want parallelism is to get good performance, and that is still hard.

Consider that 'parmap' again - sure, the runtime could launch a thread for every element, but what if the list has ten million elements? That's a lot of threads, and it will kill your performance, especially if each element is very cheap to process. So you need some sort of chunking, so that you don't start more threads than the hardware can readily utilise. And what if there are other 'parmap's in other concurrent threads in the program? You need to control the parallelism exhibited by the entire program, not just each little component, so as not to overwhelm the machine.

And of course, whilst the purity of Haskell is very convenient for parallelism, the lazy semantics are most definitely not. In fact, most parallel libraries for Haskell perform eager evaluation, because laziness is totally antithetical to efficient parallelisation.
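
For the chunking point, a minimal sketch with the parallel package (parMapChunked is just a name I'm using for this example):

code:
import Control.DeepSeq (NFData)
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- A sketch of a chunked parallel map: parListChunk sparks work in chunks
-- of the given size, so we do not pay per-element overhead when each
-- element is very cheap to process.
parMapChunked :: NFData b => Int -> (a -> b) -> [a] -> [b]
parMapChunked chunkSize f xs =
  map f xs `using` parListChunk chunkSize rdeepseq

-- e.g. parMapChunked 10000 expensiveFunction [1 .. 10000000]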

And when you get to more exotic parallel hardware, such as GPUs, the idea of efficient parallelism being "trivial" or even "easy" falls even more apart.

Now, I do believe functional languages are easier to parallelise, especially for data-parallelism, but you still need really clever compilers and cut-down languages.

I base this on being a PhD student in functional parallel programming and actually having written a compiler for a data-parallel functional language that targets parallel hardware. It's fun. :) But it's not as easy as it's often made out to be.


Athas
Aug 6, 2007

fuck that joker

Ralith posted:

You've got a lot of really good points here, but it isn't any kind of coincidence that Haskell is the place where transactional memory is becoming/has become conventional.

No, but transactional memory is not automatically very performant (again: parallelism is easy, efficient parallelism is not). Another reason is probably that GHC is the most dynamic and active research compiler I know of, and is run quite well by people who are both technically and socially competent.

Athas
Aug 6, 2007

fuck that joker

Doc Hawkins posted:

What about Erlang? I don't have any direct experience of it, but people who talk about it go on and on about millions of processes on your laptop or whatever.

That's concurrency, not parallelism. Erlang is very very good, but it's not particularly fast for computation.

Athas
Aug 6, 2007

fuck that joker

Doc Hawkins posted:

I'm not that ignorant, I just thought share-nothing concurrency would lend itself to parallelism too. But thank you for the answer.

Well, it does, but as you said yourself: Erlang lends itself to a programming model where you spawn thousands (or millions) of processes. On a modern multicore processor, you need maybe 32 processes to exploit the hardware, and more just means overhead. Even with Erlang's lightweight threads you end up "simulating" all this parallelism that you don't really need. And on a cluster, performance depends on efficient communication patterns and locality, which Erlang doesn't really expose in a useful way (any process may send a message to any other process). Message passing is presently the only efficient way to program clusters, though.

Efficient parallelism is all about limiting communication and ensuring that the granularity of parallelisation isn't smaller than it has to be.

Athas
Aug 6, 2007

fuck that joker
What is supposed to be the great benefit of transactional memory over threads/processes using message passing anyway? I really like message passing for concurrent systems - it makes it much easier for me to understand what is going on, and it appears simpler to implement too.
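
For reference, this is roughly what the transactional style looks like with GHC's stm package; a minimal sketch (the transfer example is just an illustration):

code:
import Control.Concurrent.STM
  (TVar, atomically, modifyTVar', newTVarIO, readTVarIO)

-- A sketch of transactional memory: the composite update runs atomically,
-- and the runtime detects conflicts and retries the transaction.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  transfer a b 25
  readTVarIO a >>= print  -- 75
  readTVarIO b >>= print  -- 25

The usual selling point is that such updates compose, which plain locks do not; whether that beats just sending messages is exactly the question above.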

Athas
Aug 6, 2007

fuck that joker

the talent deficit posted:

A lot of people hoped it would just be a flag passed to the compiler that made their lovely code work concurrently without having to change any of it.

My experience so far has been that trying to get parallelism without talking about parallelism in the program itself is totally futile, although data-parallel programming can work at the small scale. Man, we ought to have a parallel programming thread. So many opinions.

Athas
Aug 6, 2007

fuck that joker
As is the eventual fate of every functional programmer, I have been developing my own purely functional language. It's still pretty raw and quite simple (I prefer "austere"), supporting neither polymorphism nor higher-order functions (although built-in control structures fake the latter). Its main claim to fame is that the compiler can generate parallel GPU code that runs pretty fast. It can also generate Python+GPU code, which means you can write things like an interactive Mandelbrot explorer that renders the fractal in real time while you zoom and scroll about.

Athas
Aug 6, 2007

fuck that joker
Every optimising compiler belongs in the horror thread. It's always a mess of heuristics, hacks and "good-enough"s.

Athas
Aug 6, 2007

fuck that joker
Does Idris produce decently efficient code nowadays? Does it have arrays? I remember that it was supposed to be more practical than Agda, but I also heard someone claim that runtime efficiency had gotten derailed a bit along the way. I don't know what to believe anymore!

Athas
Aug 6, 2007

fuck that joker

Asymmetrikon posted:

You can kind of hack this into Haskell at the moment - if you use the DataKinds language feature, you can get type-level Nats (and other inductive types), which can be used in those kinds of type definitions (see this article for more). It's very limited, though.

My experience is that it is terrible to use in practice. As an experiment, I had a module in my compiler that I decided to implement using all this type-level magic. While I got it to work, I ended up spending a ton of time proving arithmetic identities (like x+(y-z)=(x+y)-z) to satisfy the type checker. This is also a serious PITA in real dependently typed languages, but they generally have tooling to generate trivial but tedious proofs. I have since replaced it with dynamic checks that fail if my desired invariants are invalidated. Works fine and is much easier to understand and modify.
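
For anyone curious what the type-level-Nat style looks like, a minimal sketch (length-indexed vectors; all the names here are just illustrative):

code:
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- A sketch of DataKinds-style type-level naturals: the length of a vector
-- is tracked in its type, so the result length of append is checked by
-- the compiler.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

The pain starts as soon as the type checker needs a fact like Add m n ~ Add n m that does not fall out of the definitions by plain reduction - that is where the hand-written proofs come in.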

Athas
Aug 6, 2007

fuck that joker
My guess is that it's because (,) behaves like the Writer monad and Either behaves like an error monad, so they figured they should support the same operations. It's rather human-hostile, though.
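
A minimal sketch of the behaviour in question (these are the standard base instances, nothing extra assumed):

code:
-- fmap on a pair only touches the second component (Writer-like), and on
-- Either it only touches the Right value (error-monad-like).
examplePair :: (String, Int)
examplePair = fmap (+ 1) ("log", 41)     -- ("log", 42)

exampleRight, exampleLeft :: Either String Int
exampleRight = fmap (+ 1) (Right 41)     -- Right 42
exampleLeft  = fmap (+ 1) (Left "boom")  -- Left "boom"

-- The human-hostile part: Foldable on pairs also only sees the second
-- component, so e.g. length ("oops", 3) == 1.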

Athas
Aug 6, 2007

fuck that joker

Serenade posted:

At this moment I'm not interested in being true doom functional, but the siren's song of parallel programming is always alluring.

Much like the sirens of myth, functional parallel programming mostly just leads to a lot of wrecks.

Athas
Aug 6, 2007

fuck that joker

Roadie posted:

In JS, I can treat a function as an object, but I can't go in and muck around with the AST of that function as a normal part of the language.

You also cannot do that in any recent version of Lisp. You can get access to the syntax tree at compile-time via macros, but you can't inspect function objects at runtime (unless you made sure to squirrel away their definition somewhere yourself).
