SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

QuantumNinja posted:

Maybe it wasn't clear from the video, or maybe you just didn't watch it, but it leaves the original-arity definition in place, replacing its body with an invocation of the tail-recursive function, so someone using your library that's all TCO doesn't ever have to know.

That would make sense, but something he mentioned later in the video made me think that wasn't the case.

I certainly would use what he's made if it stays as an opt-in kind of thing like it is right now. I would love to have tail calls in Clojure, and CPS would make lovely stuff like delimited continuations pretty simple to implement, I think. I'm just not going to hold my breath waiting for it to become a general part of the language.


shrughes
Oct 11, 2008

(call/cc call/cc)

Malcolm XML posted:

Lazy evaluation is perfectly fine if you understand how forcing works.

If this were true, then people who understand how forcing works would not have trouble with lazy evaluation. However, people who understand how forcing works do have trouble with lazy evaluation. It has problems both in the large and in the small, even for the most hardcore Haskellers.

Malcolm XML posted:

Yes because time constrained single person programming contests are reasonable examples of how real world engineering works.

This isn't contradicting the claim I made. Programming contests are far removed from how you'd write real programs, whether or not you plan to use immutable data structures. (In programming contests, you're perfectly happy to throw up global variables, overallocate arrays "just in case," communicate between parts of code in strange ways, etc. They're the most extreme case of just wanting to get stuff working.)

Malcolm XML posted:

When you have to deal with multiple programmers, and ancient code, and have to actually reason about how your program works, immutable data structures are a godsend.

This doesn't contradict the claim I made either, and it's not a statement I'd disagree with (or news to me).

Malcolm XML posted:

Try sharing mutable state between threads and let me know how that goes for you.

I am paid to work on code that shares mutable state and immutable "state" between threads, so I'm quite familiar with how that goes. This also doesn't contradict and isn't even remotely related to the statement I made, so I'm not sure why you're pointing it out.

Malcolm XML posted:

Immutable structures (and non strict semantics) allow for composition.
For example, map f . map g equalling map (f.g) simply does not hold in the presence of seq (the strictness primitive). The entire technique of stream fusion relies on this.

Non-strict semantics have nothing to do with what I was talking about, which is, to remind you, that mutable data structures are often easier to get code working with, and to write and modify code for. Stream fusion has nothing to do with what I'm talking about either. I also don't see the point of you explaining this to me, because I am already familiar with these subjects. After all, I am paid to work on code that performs transformations of functional/declarative code in the same vein as stream fusion (including the transformation you've described above).

Malcolm XML posted:

Are there programs where state is very useful? Sure. Does that state have to be mutable? Nope. The State monad hides that plumbing (replacing old state with new state). Now, mutably updating a reference as opposed to creating/destroying memory is probably (not always!) faster but it's an optimization that may destroy other guarantees. (And you can use the ST monad in haskell to get it)

This also has nothing to do with what I was talking about, and I don't understand why you'd explain the State monad to somebody who probably made the first publicly released Parsec-compatible monad transformer parser combinator library.

You haven't said anything that disputes the claim I made, which is that mutable data structures are often easier to get code working with, and to write and modify code for. I described how and why in my previous reply. I don't see why you think reciting irrelevant functional programming talking points would be a counter-argument to this simple claim. Do you realize that I've had (call/cc call/cc) as my custom title for 6 years?
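(For anyone who hasn't met the ST monad mentioned in the quote above, here is a minimal, purely illustrative sketch: local mutation through an STRef, wrapped in runST so the function is still pure from the outside.)

code:
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sum a list with a mutable accumulator; runST keeps the mutation local,
-- so sumST is observably pure.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc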

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS

shrughes posted:

mutable data structures are often easier to get code working with, and to write and modify code for. I described how and why in my previous reply.

You did? Please provide quotes from your earlier post.

All I remember from your last post was vague assertions without examples, something about how much you like for loops, and

shrughes posted:

let's go with the stronger claim that avoiding functional programming language constructs like higher-level functions is often a desirable thing, because they make editing code a pain.

Have you tried telling C# programmers that LINQ is making editing code a pain, or telling JavaScript coders that jQuery is making editing code a pain? (Both of those use higher-level functions a lot, you see)

shrughes
Oct 11, 2008

(call/cc call/cc)
Do you know what the word "often" means?

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS
Often: frequently; many times.

Incorrect usage: "avoiding functional programming language constructs like higher-level functions is often a desirable thing, because they make editing code a pain." In this case, the writer is incorrect, as programmers have quickly adopted language frameworks and libraries which involve higher-level functions into regular use.

Correct usage: "shrughes often makes posts about technical topics without including concrete examples." Here, the writer is working from a group of five posts and has observed a quality present in all five samples.

shrughes
Oct 11, 2008

(call/cc call/cc)

Tequila Bob posted:

Incorrect usage: "avoiding functional programming language constructs like higher-level functions is often a desirable thing, because they make editing code a pain." In this case, the writer is incorrect, as programmers have quickly adopted language frameworks and libraries which involve higher-level functions into regular use.

That people put X into regular use (and the implicit claim that it's good) does not contradict the claim that avoiding X is often a desirable thing.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

shrughes posted:

That people put X into regular use (and the implicit claim that it's good) does not contradict the claim that avoiding X is often a desirable thing.

We are still waiting for a large number of concrete examples where mutable data structures are advantageous; most of the ones from my post can be easily handled with the state monad.

shrughes
Oct 11, 2008

(call/cc call/cc)
That's crazytalk; a state monad is extremely cumbersome. What if you want to change the code to use a new variable? In a braces-and-semicolons programming language, you just declare the variable and use it. With a state monad you'd have to change the type you're iterating over, if not use StateT, and by now it's a tuple or a record, and you've got the hassle of all your code having to put stuff in and take stuff out of the product type, instead of just using a couple of variables.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

shrughes posted:

That's crazytalk; a state monad is extremely cumbersome. What if you want to change the code to use a new variable? In a braces-and-semicolons programming language, you just declare the variable and use it. With a state monad you'd have to change the type you're iterating over, if not use StateT, and by now it's a tuple or a record, and you've got the hassle of all your code having to put stuff in and take stuff out of the product type, instead of just using a couple of variables.

yeah the state monad sucks without do-notation or, since we're in the lisp thread, macros.

Holy poo poo shrughes do you even pay attention to context?

The ST monad is a lot of work because mutable state is tough but macros

quote:

This isn't contradicting the claim I made. Programming contests are far removed from how you'd write real programs, whether or not you plan to use immutable data structures. (In programming contests, you're perfectly happy to throw up global variables, overallocate arrays "just in case," communicate between parts of code in strange ways, etc. They're the most extreme case of just wanting to get stuff working.)

You were the one who brought up programming contests as a good example of where mutable structures are worth it, and my point is that it's an irrelevant case for 99% of programmers! (And I disagree: I find immutable structures with nonstrict semantics easier to rapidly code with because I don't have to worry about poo poo like manually fusing my loops)



quote:


Non-strict semantics have nothing to do with what I was talking about, which is, to remind you, that mutable data structures are often easier to get code working with, and to write and modify code for. Stream fusion has nothing to do with what I'm talking about either. I also don't see the point of you explaining this to me, because I am already familiar with these subjects. After all, I am paid to work on code that performs transformations of functional/declarative code in the same vein as stream fusion (including the transformation you've described above).



Stream fusion/shortcut fusion simply does not work the same (or as effectively) without non-strict semantics shrughes. Correctness in the presence of seq is far more difficult so if you're naively doing the same transformations you best guarantee that you aren't subtly breaking code or altering semantics. Perhaps the people that employ you should reconsider paying for someone who doesn't take basic things into account?


quote:

I am paid to work on code that shares mutable state and immutable "state" between threads, so I'm quite familiar with how that goes. This also doesn't contradict and isn't even remotely related to the statement I made, so I'm not sure why you're pointing it out.

Clearly not because you don't have the perspective that comes with dealing with legacy concurrent code that uses shared mutable memory. In a word: it sucks. You have to be extremely careful not to end up in an inconsistent state and it's very expensive in terms of development time to guarantee this. Perhaps for your case mutability works but you aren't most programmers.

And my point was that mutable state plus concurrency is a recipe for disaster, so it's much easier to get concurrent algos working on immutable state and then introduce mutable state if needed. Key word: needed. Measure and test, shrughes, that's the core of engineering.

And you're a bad engineer if you're not coding to an interface so you can swap mutable structures for immutable ones as needs change, but that's neither here nor there.
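A minimal sketch of the hazard being described above (purely illustrative; compile with -threaded and run with +RTS -N2 to see it): a plain read-modify-write on shared mutable state can silently lose updates, which is exactly the kind of thing you have to defend against.

code:
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)
import Data.IORef

main :: IO ()
main = do
  ref  <- newIORef (0 :: Int)
  done <- newEmptyMVar
  forM_ [1, 2 :: Int] $ \_ -> forkIO $ do
    -- read-modify-write is not atomic, so the two threads can clobber
    -- each other's increments
    replicateM_ 100000 (readIORef ref >>= \n -> writeIORef ref (n + 1))
    putMVar done ()
  replicateM_ 2 (takeMVar done)
  readIORef ref >>= print  -- often less than 200000
  -- atomicModifyIORef' ref (\n -> (n + 1, ())) would make each update atomic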

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
And re: laziness being hard for hardcore Haskellers:

I agree, which is why having it explicit is probably a good idea. The same argument applies to mutable state.

shrughes
Oct 11, 2008

(call/cc call/cc)

Malcolm XML posted:

yeah the state monad sucks without do-notation or, since we're in the lisp thread, macros.

Do notation has nothing to do with the problem with using State monads that I talked about.

Malcolm XML posted:

Holy poo poo shrughes do you even pay attention to context?

The ST monad is a lot of work because mutable state is tough but macros

Mutable state isn't tough, just declare a variable and use it. I'm not even sure what you mean by "state", because declaring a variable and using it is something you could very well find convenient in a perfectly pure, encapsulated function.

Malcolm XML posted:

You were the one who brought up programming contests as a good example of where mutable structures are worth it, and my point is that it's an irrelevant case for 99% of programmers! (And I disagree: I find immutable structures with nonstrict semantics easier to rapidly code with because I don't have to worry about poo poo like manually fusing my loops)

We're talking under the assumption that performance doesn't matter.

Malcolm XML posted:

Stream fusion/shortcut fusion simply does not work the same (or as effectively) without non-strict semantics shrughes. Correctness in the presence of seq is far more difficult so if you're naively doing the same transformations you best guarantee that you aren't subtly breaking code or altering semantics. Perhaps the people that employ you should reconsider paying for someone who doesn't take basic things into account?

Last time I checked we weren't transforming a strictly evaluated programming language.

Malcolm XML posted:

Clearly not because you don't have the perspective that

Yet... clearly I do?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

shrughes posted:

That's crazytalk; a state monad is extremely cumbersome. What if you want to change the code to use a new variable? In a braces-and-semicolons programming language, you just declare the variable and use it. With a state monad you'd have to change the type you're iterating over, if not use StateT, and by now it's a tuple or a record, and you've got the hassle of all your code having to put stuff in and take stuff out of the product type, instead of just using a couple of variables.


shrughes posted:

Do notation has nothing to do with the problem with using State monads that I talked about.


Mutable state isn't tough, just declare a variable and use it. I'm not even sure what you mean by "state", because declaring a variable and using it is something you could very well find convenient in a perfectly pure, encapsulated function.


We're talking under the assumption that performance doesn't matter.


Last time I checked we weren't transforming a strictly evaluated programming language.


Yet... clearly I do?

Uh the point of the state monad is that you define all your state and stick it as a record in your state monad.

So you add a field to your record and you're good. You should be newtyping raw fields anyway.

Yeah, please understand: the state monad emulates the convenience of mutable state without the drawbacks of dealing with shared memory or the syntactic overhead of passing an extra state parameter into every function and getting the updated state back along with the function's result.

In fact it is isomorphic to just that.

Of course, if you're in shrughes land you either hardcode extra parameters or use global mutable state. This would be fine if concurrency weren't a thing, which was my point.

So if you're practicing good engineering discipline, either you defensively exclude multiple access to your global mutable state or you go state monad.

Of course, in the real world there's always mutability. So Haskell emulates having an immutable world by passing around a RealWorld token. In fact it allows controlled access to mutability via IORefs (IO is ST RealWorld, more or less).

If you don't understand why global mutable state yields concurrency problems, idk what to say.


So if you're having trouble understanding the mechanism of the state monad, try explicitly adding a state parameter to your functions and then returning a tuple of the function's result and the new state (or the old state if you didn't modify it).
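To make that concrete, here's a minimal sketch (illustrative names only) of the same fresh-name counter written both ways; the State version is just the explicit-passing version with the s -> (a, s) plumbing hidden:

code:
import Control.Monad.State (State, get, put, runState)

-- Explicit style: take the old state, return the result and the new state.
counterExplicit :: Int -> (String, Int)
counterExplicit n = ("id." ++ show n, n + 1)

-- State-monad style: the same thing with the plumbing hidden.
counterState :: State Int String
counterState = do
  n <- get
  put (n + 1)
  return ("id." ++ show n)

-- runState counterState 0 == counterExplicit 0 == ("id.0", 1)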

aerique
Jul 16, 2008
Do you guys feel this discussion is going anywhere? If so, please continue...

shrughes
Oct 11, 2008

(call/cc call/cc)
I feel like I'm being trolled into effortposting. It's as if I have to explain what it's like to write code in C++ or C# or Python. Or even Haskell.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
We're all pretty much just asking you for concrete examples of your claim, which you have been so far unable (or perhaps unwilling) to provide. You said:

shrughes posted:

mutable data structures are often easier to get code working with, and to write and modify code for

So far the only way you've backed this up is by saying:

shrughes posted:

Are you disputing that mutable data structures are often much easier to write algorithms for with? I meant to say "with."
...
Are you disputing that it's generally easier to modify algorithms that are mutating things?

I think everyone is disputing this by asking you to provide examples where this might be easier. I don't really think that asking you to back up these (seemingly) strong opinions counts as trolling you into effortposting. I am genuinely curious about these situations you keep hinting at.

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS
I think if shrughes actually had a good example, he would have posted it already. Personally, I'm going to assume the discussion is done for now. Let's get back to talking about Lisps!

Has anyone here ever convinced a primarily Java-based team to try a Lisp before? I'm not having a lot of luck in my current team..

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
I think a lot of the problem with the transition is giving up the familiarity of objects and moving parts. I like Scala for this transition (even though it's got a ton of other problems) because it will let you write not-exactly-Java if you want but give you the option to go pretty solidly functional if and when you'd like to. It gets functional programmers into an environment they can thrive in and OO-trapped programmers into a language they can keep tooling with.

Beyond that, I find a lot of the big selling points come when you see how simple data comprehensions and stuff are. Actors are also pretty good for that, but that's kind of another can of worms altogether.

leftist heap
Feb 28, 2013

Fun Shoe
Anyway, after finishing up a very small hobby project in Clojure I'm a little torn on starting a new, bigger project in it vs. CL or something else entirely.

I bitched about it before, but the errors and debugging in Clojure sucks sucks sucks and doesn't seem to be getting better. I really don't get it, because it seems like a huge chunk of the Clojure community just doesn't care about it or something. The Lein REPL out of the box will only give you the last line of a stack trace for some reason, with no option to do otherwise nor does it seem like anyone cares to change it. The first line of stack trace is generally useless JVM/interpreter machinery pretty far removed from your original call. Seeing the whole thing by default alone would be a huge QoL improvement over having to type (pst *e) after every exception.

Tooling is still kind of immature. Cider, ac-cider, et al. are all pretty good, but just ever so slightly jankier or slower or buggier than their CL cousins. I switched to the Cursive EAP for now. It's really good, has great auto-complete and an integrated REPL, plus a pretty useful built-in debugger. More of a resource hog than Emacs though, especially on my laptop. It's annoying to constantly have the CPU running ever so slightly hotter, especially when I'm working on the couch (really silly complaint, I know).

On the other hand, it sure is nice to use a modern language, with access to modern libraries for pretty much everything under the sun and a relatively active community, and a saner build/packaging system and so on. Pretty sure I'll be moving on with Clojure, but it's not without a little bit of sadness.

ToxicFrog
Apr 26, 2008


rrrrrrrrrrrt posted:

Anyway, after finishing up a very small hobby project in Clojure I'm a little torn on starting a new, bigger project in it vs. CL or something else entirely.

I bitched about it before, but the errors and debugging in Clojure sucks sucks sucks and doesn't seem to be getting better. I really don't get it, because it seems like a huge chunk of the Clojure community just doesn't care about it or something. The Lein REPL out of the box will only give you the last line of a stack trace for some reason, with no option to do otherwise nor does it seem like anyone cares to change it. The first line of stack trace is generally useless JVM/interpreter machinery pretty far removed from your original call. Seeing the whole thing by default alone would be a huge QoL improvement over having to type (pst *e) after every exception.

I've spent some time in the IRC channel discussing this and yeah, there's just no interest in unfucking error reporting as far as I can tell. If you implement it yourself it might get accepted, once you badger someone into reviewing it. :( It makes it really hard for me to recommend it, because I have to hedge everything with the caveat that if you ever need to debug something you will want to kill yourself.

Getting the full stack trace is almost as sad as not getting it, too, because it's 20+ frames of internal garbage for every frame of your actual code. I've never used a language that shat out as much implementation detail on error as Clojure.

quote:

On the other hand, it sure is nice to use a modern language, with access to modern libraries for pretty much everything under the sun and a relatively active community, and a saner build/packaging system and so on. Pretty sure I'll be moving on with Clojure, but it's not without a little bit of sadness.

This sums up my position pretty well.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

ToxicFrog posted:

Getting the full stack trace is almost as sad as not getting it, too, because it's 20+ frames of internal garbage for every frame of your actual code. I've never used a language that shat out as much implementation detail on error as Clojure.

I wonder if you could exploit Java's internals to suppress some of the internal error messages to clean that up a bit. JVM stack traces are some of the most painful things when writing Java, and I can't imagine what that would be like two levels up.

jneen
Feb 8, 2014
as another semi hardcore haskeller I occasionally come across problems writing compilers where it suddenly becomes this herculean task to do something simple like "add unique tags to each distinct variable binding". maybe I need to study ST more but I had a hell of a time getting it to do anything useful, and it often seems like a smell :\

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
This should work for you. Alternately, you can add the state monad to the pass in question, parameterized by an int or whatever, and build unique IDs (presumably from strings or something) as:

code:
import Control.Monad.State  -- State, get, put (from the mtl package)

-- Tag a variable name with a fresh numeric suffix, threading the
-- counter through the State monad.
uniqueID :: String -> State Int String
uniqueID var = do n <- get
                  put (n + 1)
                  return $ var ++ "." ++ show n
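For reference, running it over a whole list of names would look something like this (hypothetical usage; evalState comes from the same Control.Monad.State import):

code:
ghci> evalState (mapM uniqueID ["x", "y", "x"]) 0
["x.0","y.1","x.2"]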

aerique
Jul 16, 2008

rrrrrrrrrrrt posted:

Anyway, after finishing up a very small hobby project in Clojure I'm a little torn on starting a new, bigger project in it vs. CL or something else entirely.

I'm still a big fan of CL so if you haven't (but I think you have) do give it a try. The library situation isn't as bad as some people picture it and is only getting better with Quicklisp. I also personally like the community (do visit #lisp on Freenode!).

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Tequila Bob posted:

I think if shrughes actually had a good example, he would have posted it already. Personally, I'm going to assume the discussion is done for now. Let's get back to talking about Lisps!

Has anyone here ever convinced a primarily Java-based team to try a Lisp before? I'm not having a lot of luck in my current team..

The best way to convince people to try out a new lang is to show them why it would help them.

Try reimplementing a non-critical piece of functionality in clojure and contrasting it with the old implementation.

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

QuantumNinja posted:

I wonder if you could exploit Java's internals to suppress some of the internal error messages to clean that up a bit. JVM stack traces are some of the most painful things when writing Java, and I can't imagine what that would be like two levels up.

There's a function designed for repl use that cleans up the backtrace to remove bits that refer to language internals, but that doesn't really help when you're getting the exception in a log or something at runtime.

I think it would be possible for Clojure to catch an exception and wrap it in a new exception that has a rational stack, with the original gross stack tucked away inside for when it's needed. The problem is probably figuring out where it's okay to put that code, since changing exception types would not be kosher 90% of the time and just blindly replacing the stack (and not changing types) really isn't any better. It's an unfortunate situation.

ToxicFrog
Apr 26, 2008


SavageMessiah posted:

There's a function designed for repl use that cleans up the backtrace to remove bits that refer to language internals, but that doesn't really help when you're getting the exception in a log or something at runtime.

I think it would be possible for Clojure to catch an exception and wrap it in a new exception that has a rational stack, with the original gross stack tucked away inside for when it's needed. The problem is probably figuring out where it's okay to put that code, since changing exception types would not be kosher 90% of the time and just blindly replacing the stack (and not changing types) really isn't any better. It's an unfortunate situation.

That's (pst), notable for having documentation that lies constantly about everything (try (pst 10) sometime, then compare the result to what (doc pst) says should happen) and not doing jack poo poo to remove internal frames, at least in 1.5.

In the general case this is kind of hard for Clojure to fix, yeah, both because it compiles directly into JVM bytecode rather than running in an interpreter, and because if you're doing Java interop you do want frames from inside Java code in your stack traces. It could probably be better than it is, though.

Where the error handling is really completely inexcusably bad is the Clojure compiler, which reacts to any sort of invalid input by crashing and dumping 60+ frames all over your terminal. The actual syntax error in your source code is somewhere near the top...sometimes.

minidracula
Dec 22, 2007

boo woo boo

rrrrrrrrrrrt posted:

Anyway, after finishing up a very small hobby project in Clojure I'm a little torn on starting a new, bigger project in it vs. CL or something else entirely.

I bitched about it before, but the errors and debugging in Clojure sucks sucks sucks and doesn't seem to be getting better. I really don't get it, because it seems like a huge chunk of the Clojure community just doesn't care about it or something. The Lein REPL out of the box will only give you the last line of a stack trace for some reason, with no option to do otherwise nor does it seem like anyone cares to change it. The first line of stack trace is generally useless JVM/interpreter machinery pretty far removed from your original call. Seeing the whole thing by default alone would be a huge QoL improvement over having to type (pst *e) after every exception.

Tooling is still kind of immature. Cider, ac-cider, et al. are all pretty good, but just ever so slightly jankier or slower or buggier than their CL cousins. I switched to the Cursive EAP for now. It's really good, has great auto-complete and an integrated REPL, plus a pretty useful built-in debugger. More of a resource hog than Emacs though, especially on my laptop. It's annoying to constantly have the CPU running ever so slightly hotter, especially when I'm working on the couch (really silly complaint, I know).

On the other hand, it sure is nice to use a modern language, with access to modern libraries for pretty much everything under the sun and a relatively active community, and a saner build/packaging system and so on. Pretty sure I'll be moving on with Clojure, but it's not without a little bit of sadness.
Not to join the I-wish-I-could-love-Clojure-but-it-just-won't-let-me bandwagon unnecessarily, but with this post and your previous post, you hit on almost the exact same set of things that, despite me being a huge Common Lisp and Scheme partisan, have kept me at least an arm's length away from Clojure, even after really trying to give it a fair shake.

I keep thinking my distaste for Java is what turns me off, but it's more that I really can't avoid Java if I write Clojure even if I wanted to. And I'm not even talking about the Java interop (which is there for perfectly fine and sensible reasons), but the fact that I get dumped into dealing with Java and the JVM, in Java terms, when debugging basically anything. I feel the debugging support that is considered part and parcel of almost any decent CL implementation is completely absent from Clojure, and it really bums me out. I should hasten to point out my information is probably out-of-date though; the last version I seriously played with was 1.3.

I thought (and still do think, to some extent) that ClojureCLR was sort of my end run around this problem, albeit not really a solution per se. But I still think that getting booted to the underlying implementation language and machinery is poor form, and at this point, kind of a cop-out. But then, I too haven't done anything to improve the situation, so perhaps I shouldn't complain.

I also don't like lein, but that's a separate thing.

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

ToxicFrog posted:

That's (pst), notable for having documentation that lies constantly about everything (try (pst 10) sometime, then compare the result to what (doc pst) says should happen) and not doing jack poo poo to remove internal frames, at least in 1.5.

Actually I was thinking of some of the functions in clojure.stacktrace.

DrankSinatra
Aug 25, 2011
I've been reading up on and playing with Common Lisp for a while now. Years ago, I read the first few parts of SICP, and really liked it. Recently, I decided to pick up a more practical dialect of Lisp, and started reading [the rather apropos] "Practical Common Lisp." I'm also reading Paul Graham's "On Lisp." As I'm reading these books, and thinking back on my experience with Scheme along with purely functional languages, I'm a little befuddled by the intermingling of imperative and functional aspects.

Coming from SICP, I'm used to thinking of Lisp as being tail-call heavy, but most Common Lisp code I've seen tends to use looping constructs instead. What's the rationale behind it? I always assumed that modern Common Lisp interpreters/compilers implemented tail-call optimization. Given Lisp's syntax, tail recursion seems a lot cleaner from a readability standpoint, but that may just be due to the novelty [for me] of Common Lisp's loop constructs.

I'm also a bit shaky on how most good Lisp programmers use syntactic forms with side effects. Paul Graham gives some solid guidelines, but I'm not sure how I'd go about applying them in larger, more I/O heavy programs. Is this something people just get a better sense of with experience?


FIHGT W HUBBY
Aug 16, 2009
In Scheme, tail call optimization is required by the language standard itself, but it isn't required by the ANSI Common Lisp standard. Implementations usually do optimize tail calls, but I guess if you're trying to write standards-compliant code you can't rely on it.

aerique
Jul 16, 2008

DrankSinatra posted:

["Practical Common Lisp", "On Lisp"] As I'm reading these books, and thinking back on my experience with Scheme along with purely functional languages, I'm a little befuddled by the intermingling of imperative and functional aspects.

Common Lisp (CL) is a multi-paradigm language, so it doesn't force you to use a specific programming idiom (is this the right word?). With the current functional programming popularity, CL code will tend to be more functional as well (mine certainly is), but one is still free to use another idiom if that is more appropriate.

I use LOOP a lot since it is so convenient and powerful, while others abhor it and will use map, reduce, some self-defined fold, etc.

DrankSinatra posted:

I'm also a bit shaky on how most good Lisp programmers use syntactic forms with side effects. Paul Graham gives some solid guidelines, but I'm not sure how I'd go about applying them in larger, more I/O heavy programs. Is this something people just get a better sense of with experience?

I think so. I wouldn't call myself a good Lisp programmer but I try to avoid side-effects as much as possible unless absolutely necessary.

pgroce
Oct 24, 2002

aerique posted:

I think so. I wouldn't call myself a good Lisp programmer but I try to avoid side-effects as much as possible unless absolutely necessary.

Every useful program will have side effects. It's not a matter of whether you can avoid them, but where you put them.

I agree with the underlying assumption, though; it's a good idea to segregate effectful code from effect-free code as much as possible. This is true in non-lisp, non-FP languages too; it's much easier to test effect-free code, and when you do have to test effectful code, it's easier to test a few general operations and test that those operations are called in the right order in the effect-free code.

There is also the somewhat distinct question of mutable state. Mutating state is inherently effectful, and the less of that you can get away with the better. Even here, though, mutable state is inevitable sometimes, usually for performance or interoperability with someone else's state-mutating code.

One of the big tensions when writing Clojure is that native Clojure favors immutable state, but Java uses mutability all over, and Clojure, in practice, features a lot of Java interop. Clojure's devs have been adept at recognizing that tension and managing it. That's the model I try to emulate -- avoid the avoidable, manage the inevitable.
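A minimal sketch of that segregation (illustrative names, not from anyone's codebase): keep the decision-making pure and trivially testable, and confine the effects to a thin outer shell.

code:
-- Pure core: easy to unit-test, no effects.
greetingFor :: String -> String
greetingFor name = "hello, " ++ name

-- Thin effectful shell: all the I/O lives here.
main :: IO ()
main = do
  name <- getLine
  putStrLn (greetingFor name)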

Shinmera
Mar 25, 2013

I make games!

FIHGT W HUBBY posted:

Implementations usually do optimize tail calls

This is not quite true. Most will offer TCO, but for example SBCL only applies TCO if the optimization settings are (> SPACE DEBUG) or (> SPEED DEBUG).

Here's an extensive look at TCO in the different implementations. Generally I would advise writing a loop rather than recursing.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
In case anyone is interested in what the Lisp Machines of yore were like, Brad Parker's Unlambda has working, open-source emulators for both the original MIT CADR and the TI Explorer (a CADR descendant, via LMI).

Both emulators are a bit buggy but functional. The CADR emulator is pretty fast but very bare-bones. The Explorer emulator does more, because the Explorer did more, but it works by simulating every individual bus transaction and is thus extremely slow.

That's honestly pretty crazy, of course: Most emulated systems from that era run orders of magnitude faster than the originals, after all. The unique writable-microcode nature of these systems requires simulating the CPU microcode behavior rather than macroinstruction behavior like most other CPU emulators do. Even so, there are probably ways they could be sped up substantially, opening this chapter of history to a lot more people.

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
Welp, I'm transitioning to the Yesod framework on Haskell for our next web app project instead of continuing with the Clojure/Ring route. Microframeworks are fun for simple things and are great for learning why a framework is important. For serious work it's actually really painful to replicate all of the convenience you get from a well thought-out framework with years of work put into it and dozens of much smarter people than me improving it. Easy switching between JSON-only and server-side HTML/CSS/JS generation, boundary validation / input validation, baked-in auth and authz plugins w/ the option to write your own, a solid reloading / rebuilding / retesting flow, hashing of assets, etc. It's basically like Rails, except everything from HTML templates and request data to URL params is statically checked; you have to try pretty hard to gently caress it up. Monad transformers are a bit tricky to reason about, and error messages can be nightmarish because of the monad layering, but I think you get used to it once you grok it. Snoyman is about 1000x smarter than me, so I trust his design choices.

As soon as you start reaching scale with Clojure and need to cut chunks of logic out of the main Ring app and refactor them into shared libraries for multiple projects, you really start feeling the pain of dynamic typing. You better have fantastic code coverage when you're doing any sort of refactoring at that stage or you'll spend weeks chasing runtime regressions. To me it feels like Clojure is really neat for smaller projects, but the cons actually increase as the codebase grows in size. I'm at about 15k lines right now, so it's not even that large. I don't know about most people, but writing thousands of unit tests as a way to replace a compiler, i.e. for the sake of validating type sanity, seems like a poor use of developer time. I'd rather let a very smart compiler do that for me and spend time testing business logic, not the types.

Meta prediction: in 10k lines I'll come back here and bitch about how much of a piece of poo poo Haskell is and how Shen is the second coming of Christ.


Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS
Haskell is good and all (and Yesod is awesome), but you'll have problems with large Haskell projects too - just different ones than you'd have with Lisp. Lazy-by-default turns into a very leaky abstraction when you start trying to improve performance, and you're going to miss the sheer number of libraries that Clojure(+Java) gave you. (Example 1: Right now, I'm trying to write a Haskell program to connect to Oracle; the only library I've found for this purpose (takusen-oracle) segfaults the instant I create a second connection.) (Example 2: Haskell has no equivalent of Clojure's excellent core.logic, that I know of.)

That said, I'd be interested to hear how you like Haskell/Yesod in a couple weeks' time.

negationix
May 1, 2007
Speaking of core.logic, what would be some good resources to learn more about it? Or miniKanren in general? So far, I have been relying on various blog postings and event talks that have been linked at the official miniKanren site, but something more substantial would be nice. I have heard that the Reasoned Schemer is good and will order it shortly.

The reason for this is that I would like to be able to write some parts of my game in Adderall (a miniKanren implementation), but I find it hard to make the leap from "if I want to have this kind of list of characters in the end, give me a solution for how to get there" to "if I have these kinds of constraints and goals, spit out a nifty level for me".

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

negationix posted:

Speaking of core.logic, what would be some good resources to learn more about it? Or miniKanren in general? So far, I have been relying on various blog postings and event talks that have been linked at the official miniKanren site, but something more substantial would be nice. I have heard that the Reasoned Schemer is good and will order it shortly.

The reason for this is that I would like to be able to write some parts of my game in Adderall (a miniKanren implementation), but I find it hard to make the leap from "if I want to have this kind of list of characters in the end, give me a solution for how to get there" to "if I have these kinds of constraints and goals, spit out a nifty level for me".

As someone who has written papers on miniKanren, I can tell you this is incredibly impractical. Every naive miniKanren implementation is far too slow to reasonably compute something like that, and Adderall suggests it isn't focused on speed. More importantly, without something like rKanren's search logic, you'd have to generate hundreds or thousands of levels before you got one interesting enough to play in. A better move would be to write a functional generator, and then use miniKanren to check the generated level to see if the constraints hold.

If you really want to learn the language, pick up a copy of The Reasoned Schemer. Dan's books are gentle and thorough, and it will be a good start.

jneen
Feb 8, 2014

Tequila Bob posted:

Haskell is good and all (and Yesod is awesome), but you'll have problems with large Haskell projects too - just different ones than you'd have with Lisp. Lazy-by-default turns into a very leaky abstraction when you start trying to improve performance, and you're going to miss the sheer number of libraries that Clojure(+Java) gave you. (Example 1: Right now, I'm trying to write a Haskell program to connect to Oracle; the only library I've found for this purpose (takusen-oracle) segfaults the instant I create a second connection.) (Example 2: Haskell has no equivalent of Clojure's excellent core.logic, that I know of.)

That said, I'd be interested to hear how you like Haskell/Yesod in a couple weeks' time.

Oracle? :stonk:

Well, of course you'll have better luck connecting to oracle on *the jvm*.

I am also interested in your experience with Yesod though - I've watched the videos but never used it for a project. I may give it a go if you report a good experience with it.


DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!

Tequila Bob posted:

Haskell is good and all (and Yesod is awesome), but you'll have problems with large Haskell projects too - just different ones than you'd have with Lisp. Lazy-by-default turns into a very leaky abstraction when you start trying to improve performance, and you're going to miss the sheer number of libraries that Clojure(+Java) gave you. (Example 1: Right now, I'm trying to write a Haskell program to connect to Oracle; the only library I've found for this purpose (takusen-oracle) segfaults the instant I create a second connection.) (Example 2: Haskell has no equivalent of Clojure's excellent core.logic, that I know of.)

As far as I can tell the lack-of-libraries thing is slowly being addressed by the community; e.g. Elasticsearch didn't have a good library until a couple of weeks ago. So at the very least things are moving in the right direction.

Regarding laziness, I've consulted a lot of clever people on that one and I've yet to see a consensus about it. Even SPJ says that in retrospect it turned out to be too much of a PITA, but it also seems that people who are good at the paradigm don't struggle too much with it. I'm sure writing performant code in any ecosystem is not trivial. It also seems that you end up trading off codebase size for performance, which might fit one's use case: http://www.serpentine.com/blog/2010/03/03/whats-in-a-parser-attoparsec-rewired-2/
