ToxicFrog
Apr 26, 2008


DreadCthulhu posted:

Welp, I'm transitioning to Yesod framework on Haskell with our next web app project instead of continuing with the Clojure/Ring route. Microframeworks are fun for simple things and are great for learning why a framework is important. For serious work it's actually really painful to replicate all of the convenience you get from a well thought-out framework with years of work put into it and dozens of much smarter people than me improving it. Easy switching between JSON-only and server-side HTML/CSS/JS generation, boundary validation / input validation, baked-in auth and authz plugins w/ option to write your own, a solid reloading / rebuilding / retesting flow, hashing of assets etc. It's basically like Rails, except everything from HTML templates, request data, to URL params is statically checked, you have to try pretty hard to gently caress it up. Monad transformers are a bit tricky to reason about, and error messages can be nightmarish because of the monad layering, but I think you get used to it once you grok it. Snoyman is about 1000x smarter than me, so I trust his design choices.

As soon as you start reaching scale with Clojure and you need to cut off chunks of logic from the main Ring app, refactor them into shared libraries for multiple projects, you really start feeling the pain of dynamic typing. You better have fantastic code coverage when you're doing any sort of refactoring at that stage or you'll spend weeks chasing runtime regressions. To me it feels that Clojure is really neat for smaller projects, but the cons actually increase as the codebase grows in size. I'm at about 15k lines right now, so it's not even that large. I don't know about most people, but writing thousands of unit tests as a way to replace a compiler i.e. for the sake of validating type sanity seems like a poor use of developer time. I'd rather let a very smart compiler do that for me and spend time testing business logic, not the types.

Coincidentally, I'm just starting work on a project using Luminus (which I understand is more batteries-included than straight Ring) and core.typed for type checking, specifically to address those two issues. I will report back on how it goes!


DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!

ToxicFrog posted:

Coincidentally, I'm just starting work on a project using Luminus (which I understand is more batteries-included than straight Ring) and core.typed for type checking, specifically to address those two issues. I will report back on how it goes!

Luminus is basically a bare-bones Ring app with a bunch of dependencies already added to project.clj for you - you can read the source for the entire thing in 5 minutes. Building a micro-framework from the ground up in clj is basically unavoidable at this point, unless something changed dramatically in the past month or so while I wasn't paying attention. Oh yeah, there's Pedestal, except nobody uses it.

Would be interesting to see how core.typed turns out though.

Deus Rex
Mar 5, 2005

jneen posted:

Well, of course you'll have better luck connecting to oracle on *the jvm*.

Oracle RDBMS is nearly two decades older than the JVM, let alone Oracle's ownership of HotSpot. OP's point is that the library ecosystem as a whole is much more extensive for Java or JVM languages than for Haskell (or basically any other platform), which is pretty inarguable.

Deus Rex fucked around with this message at 22:13 on May 19, 2014

jneen
Feb 8, 2014

Deus Rex posted:

Oracle RDBMS is nearly two decades older than the JVM, let alone Oracle's ownership of HotSpot. OP's point is that the library ecosystem as a whole is much more extensive for Java or JVM languages than for Haskell (or basically any other platform), which is pretty inarguable.

I suppose, although using java in clojure is still using an ffi, like you'd do in Haskell. And there's loads of libraries for C! :newlol:

negationix
May 1, 2007

QuantumNinja posted:

As someone who has written papers on miniKanren, this is incredibly impractical. Every naive miniKanren implementation is far too slow to reasonably compute something like that, and Adderall suggests it isn't focused on speed. More importantly, without something like rKanren's search logic, you'd have to generate hundreds or thousands of levels before you got one interesting enough to play in. A better move would be to write a functional generator, and then use miniKanren to check the level to see if the constraints hold.

If you really want to learn the language, pick up a copy of Reasoned Schemer. Dan's books are gentle and thorough, and it will be a good start.

Hm.. too bad, it sounded like it would have been a good match. But thanks for the tip. I better try out something else then. I'm still going to learn miniKanren though, just because I like the idea.

What if I already had the topology of the level down (rooms and their connections to each other) and just needed to place items and monsters and such in those rooms? Would that simplify/speed up the process? Basically, I just want to write something along these lines (rough core.logic sketch after the list):
- there's stairs somewhere in the level (player starts here)
- there's treasure as far as possible from the stairs
- next to the treasure is some sort of boss room
- if there's an unused dead-end, place a minor treasure there
and so on.
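Something like the following is what I have in mind, using Clojure's core.logic purely as an illustration of the miniKanren style (a minimal, hypothetical sketch: the room names are made up, and the "as far as possible from the stairs" part isn't expressible this directly - it would need a distance relation over the room graph or a scoring pass over the candidate solutions):
code:
(require '[clojure.core.logic :refer [run* fresh membero == !=]])

;; Hypothetical room layout: just keywords, no geometry.
(def rooms [:entrance :hall :crypt :dead-end])

;; Enumerate placements of stairs/treasure/boss under a few simple constraints.
(run* [q]
  (fresh [stairs treasure boss]
    (membero stairs rooms)
    (membero treasure rooms)
    (membero boss rooms)
    (!= treasure stairs)   ; treasure not in the starting room
    (!= boss stairs)
    (!= boss treasure)     ; "boss room next to treasure" would need an adjacency relation, elided here
    (== q {:stairs stairs :treasure treasure :boss boss})))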

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
Welp, putting a yesod app in production as a paid product tomorrow, this is going to be interesting. After working with homebrew micro-frameworks for a couple of years it was very interesting to have to go back to that feeling you get with Rails when you have no idea where anything is configured/set/changed/added and you need to learn someone else's interpretation of the universe.

The upside is that obviously the guys maintaining the framework are really vicious about keeping it elegant and effective, so I'm happy with finally being able to delegate all of the "scaffolding" to someone else who knows what they're doing.

Still not fully comfortable with building complex routes with query strings and working with Shakespearean templates; I got used to Jinja-level awesome and yesod's paradigm is somewhat different, but it's a good additional point of reference.

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
Btw if you're not using Schema for your Clojure APIs, you definitely should start now. Type/schema checking + type coercion all in one, makes solving the boundary issue just a bit easier in Clojure. Obviously having static types would be nicer, but that's not happening in Clojure, so might as well get the next best thing.
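Something like this, as a minimal sketch (the schema and keys are made up for illustration):
code:
(require '[schema.core :as s]
         '[schema.coerce :as coerce])

;; Hypothetical request-body schema.
(s/defschema NewUser
  {:name s/Str
   :age  s/Int})

(s/validate NewUser {:name "bob" :age 30})   ; passes, returns the map
;; (s/validate NewUser {:name "bob"})        ; throws, pointing at the missing key

;; Coercion: build a fn that turns stringy input (query params etc.)
;; into the declared types before validating.
(def parse-new-user
  (coerce/coercer NewUser coerce/string-coercion-matcher))

;; (parse-new-user {:name "bob" :age "30"})  ;=> {:name "bob", :age 30}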

Deus Rex
Mar 5, 2005

DreadCthulhu posted:

Btw if you're not using Schema for your Clojure APIs, you definitely should start now. Type/schema checking + type coercion all in one, makes solving the boundary issue just a bit easier in Clojure. Obviously having static types would be nicer, but that's not happening in Clojure, so might as well get the next best thing.

What? You know about core.typed, right?
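For anyone who hasn't seen it, an annotation looks roughly like this (minimal sketch; the function is made up and the exact syntax shifts a bit between core.typed versions):
code:
(ns demo.core
  (:require [clojure.core.typed :as t]))

(t/ann total-price [Number Number -> Number])
(defn total-price [unit-price qty]
  (* unit-price qty))

;; From the REPL, (t/check-ns 'demo.core) type-checks the namespace and
;; flags e.g. a call of total-price with a string argument.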

Beef
Jul 26, 2004
To dredge up the lazy evaluation derail/discussion again, just so I can give my informed opinion on the topic:

Haskell's way of achieving non-strictness through lazy evaluation is problematic in two big areas, even to the most experienced Haskellers: concurrency/parallelism and space leaks. The latter is simple to imagine: the runtime has to keep thunks around that might never be evaluated. For the former, imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons. You need to manually sprinkle your code with rpar, rseq and force, based on your deep knowledge of how the bowels of Haskell will digest your code.

Here's hoping that the next Haskell takes up something like 'lenient' evaluation: non-strict evaluation where each expression is evaluated 'eventually'. That is, you still have non-strict semantics but you lose the guarantee that an unused expression is not touched by the evaluator.
Your prime sieve code might grow a few lines, but sometimes the cost is just not worth the headache.

Linking back to Lisp: EM-Lisp had lenient evaluation and was designed from the ground up to run on a parallel architecture.
Additionally, the Lisp family has phased out expressive features in the past in favor of more pragmatic alternatives. For instance: fexprs were replaced with macros. Fexprs were strictly more expressive, but were uncompilable, and macros fit 99% of the use cases anyway.



(Speaking of Haskell, is anyone attending ICFP next month?)

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Beef posted:

imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons

This example confuses me, because as a fairly experienced Haskell programmer I can't see anything wrong with evaluating that expression in any order you'd care to try. There's no evidence of unsafe sequencing, so I would expect it to always have the same result regardless of evaluation order. What are the "obvious reasons" you're thinking of?

Furthermore, the GHC threaded runtime is hardly "execute all expressions in parallel". There's a significant amount of analysis that goes on under the hood to try to choose reduction orders which both avoid massive lock contention and make good use of parallel resources. In practice, I write real-world programs which regularly get linear speedup up to around 4 parallel threads with no explicit parallel annotations whatsoever, which isn't at all bad for something I'm getting completely for free. I typically never use explicit sequencing annotations except when writing interactive programs where I want to move my latency to different places in the evaluation order for user-interface reasons; the compiler does a good-enough job of discovering actual parallelism opportunities that it "just works".

I'm not really clear on your distinction between "lenient" evaluation and Haskell's "lazy" evaluation. The semantics of Haskell allow the evaluator to speculatively reduce thunks which may never be needed. It is only required to avoid speculative evaluations which would cause nontermination of a program which would terminate with nonspeculative evaluation, but in a parallel evaluator that's trivially accomplished by, for example, always having one thread working on the nonspeculative tasks.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Beef posted:

To dredge up the lazy evaluation derail/discussion again, just so I can give my informed opinion on the topic:

Haskell's way of achieving non-strictness through lazy evaluation is problematic in two big areas, even to the most experienced Haskellers: concurrency/parallelism and space leaks. The latter is simple to imagine: the runtime has to keep thunks around that might never be evaluated. For the former, imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons. You need to manually sprinkle your code with rpar, rseq and force, based on your deep knowledge of how the bowels of Haskell will digest your code.

Can you provide a derivation or set of small-step operational semantics for this lenient evaluator? Or, better yet, since this is a lisp thread just post a small interpreter.

I ask because it seems like you are merely describing strong normalization, and there are already languages that do this. Agda, for example, only guarantees strong normalization, and there are both eager and lazy implementations of it. Unfortunately, there is still a need for an escape hatch, and even in a strongly normalizing context it is difficult to specify how parallelism will proceed without thread waits, either implicit or explicit. I don't see how this addresses the problem in general.

Beef posted:

Additionally, the Lisp family has phased out expressive features in the past in favor of more pragmatic alternatives. For instance: fexprs were replaced with macros. Fexprs were strictly more expressive, but were uncompilable, and macros fit 99% of the use cases.

Can you give me an example of something you can do with Fexpr but not with syntax-case?

(Gothenburg is, unfortunately, too rich for my blood.)

Edit: in other news, the Scheme Workshop 2015 CFP is going on until September 5.

Beef
Jul 26, 2004

QuantumNinja posted:

Can you provide a derivation or set of small-step operational semantics for this lenient evaluator? Or, better yet, since this is a lisp thread just post a small interpreter.

I ask because it seems like you are merely describing strong normalization, and there are already languages that do this. Agda, for example, only guarantees strong normalization, and there are both eager and lazy implementations of it. Unfortunately, there is still a need for an escape hatch, and even in a strongly normalizing context it is difficult to specify how parallelism will proceed without thread waits, either implicit or explicit. I don't see how this addresses the problem in general.

Can you give me an example of something you can do with Fexpr but not with syntax-case?

(Gothenburg is, unfortunately, too rich for my blood.)

Edit: in other news, the Scheme Workshop 2015 CFP is going on until September 5.

Writing a complete semantics for it is a bit :effort: considering the concurrency involved. If you are interested in the detailed semantics you can check out this old paper:
S. Aditya, Arvind, J.-W. Maessen, L. Augustsson, and R. S. Nikhil, “Semantics of pH: A parallel dialect of Haskell,” presented at the FPCA 95, 1995, pp. 35–49.


Here's my take on it with a dumb sketch:

If this is eager evaluation:
code:
(define (apply fn args)
  (funcall fn (map eval args)))
and this is a possible way to do lazy evaluation:
code:
(define (apply fn args)
  (funcall fn (map make-thunk args)))
(define (eval expr)
  (switch expr
    ...
    ((lazy? expr) (force-thunk expr))
    ...))
  
then one way of doing "lenient" evaluation is something like:
code:
(define (apply fn args)
  (funcall fn
	   (map (lambda (arg)
		  (spark-thread eval arg))
		args)))
(define (eval expr)
  (switch expr
    ...
    ((future? expr) (if (resolved? expr)
			expr
		        (wait-for-it expr)))
    ...))
where spark-thread returns a future.

In the lenient case you will still evaluate all expressions; you just don't wait for the result if you do not have to, which creates some parallelism. You can do the same with lazy evaluation, and as ShoulderDaemon pointed out it gets enough parallelism to keep a small number of threads busy. However, laziness can bite you in the rear end in larger-scale programs and hardware configurations: a) speculative computing works if the compiler/runtime/you can accurately guess what will contribute to the result b) not all 'sequencings' have the same performance and c) speculatively evaluating '42/0' or various other trapdoor expressions can lead to errors and non-termination that do not exist in the non-speculative case.

ShoulderDaemon posted:

What are the "obvious reasons" you're thinking of?

code:
;; will not cause an exception in the lazy case
(car (lazy-cons 42 42/0)) 
;; similarly
(car (lazy-cons 42 (cause-segfault)))  

ShoulderDaemon posted:

It is only required to avoid speculative evaluations which would cause nontermination of a program which would terminate with nonspeculative evaluation[...]

Which as far as I know involves solving the halting problem. Other solutions exist, true, but limit parallelism or are not general.


quick edit: Check out the awesome: Parallel and Concurrent Programming in Haskell.
another quick edit: Looks like JW Maessen wrote an entire PhD on the subject.

Beef fucked around with this message at 19:09 on Aug 19, 2014

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Beef posted:

speculative computing works if the compiler/runtime/you can accurately guess what will contribute to the result

[...]

ShoulderDaemon posted:

It is only required to avoid speculative evaluations which would cause nontermination of a program which would terminate with nonspeculative evaluation[...]

Which as far as I know involves solving the halting problem. Other solutions exist, true, but limit parallelism or are not general.

In Haskell, assuming that there exists any reduction which will contribute forward progress, the headmost reduction will be one such reduction; this is precisely the guarantee that not being a strict language provides. As a result, as long as any runtime guarantees that headmost reductions will eventually be performed (this is trivial) they will exhibit the termination requirement specified in the language standard. This is a completely general solution which does not limit parallelism and does not require solving the halting problem; all visible reductions are available to be reduced at any time, in any order, as long as you will eventually get around to reducing the head.

I think what has you confused about Haskell is that you have some idea that speculative evaluations are not allowed to produce exceptions or bottom. Consider this case:

code:
let
  const4 w x y z = w
  infiniteloop = length [1..]
  exception = error "this is an exception"
  divzero = 1 `div` 0 -- This is a different sort of exception
in const4 (length [1..10000000]) infiniteloop exception divzero
A sketch of the reduction graph at the beginning of execution looks something like this:

code:
const4
|
+-------------+------------+---------+
|             |            |         |
length        infiniteloop exception divzero
|             |            |         |
[1..10000000] length       <error>   div
              |                      |
              [1..]                  +-+
                                     | |
                                     1 0
A parallel evaluator is allowed to reduce thunks speculatively in any node here, as long as it will eventually get around to reducing the const4 node (which will promote the first length node to the new head position). Some other nodes will quickly turn into exceptions at some point, but that's fine, because they aren't at the head position of the graph yet so the runtime continues finding new things to do; it is actually very common in Haskell runtimes to have many exception nodes in the reduction graph. The reductions under the infiniteloop node will continue producing new work, forever, but again that's fine because it isn't doing so at the head of the graph and an eventual head reduction will throw away that entire subtree. The head of the graph will always be a nonspeculative operation and is trivial to locate, and so as long as our evaluator always performs head reductions, we get the correct termination behaviour.

We might wind up with an intermediate graph that looks like this, if we picked a particularly degenerate evaluation order:

code:
const4
|
+-------------+------------+-------+
|             |            |       |
length        infiniteloop <error> <error>
|             |
[1..10000000] length
              |
              :
              |
              +-+
              | |
              1 :
                |
                +-+
                | |
                2 :
                  |
                  (...millions and millions of nodes...)
and as long as we get around to scheduling that const4 node at some point, it just turns into

code:
length
|
[1..10000000]
throwing away all of our wasted speculative effort and producing the exact same semantics as a nonspeculative evaluator.

As near as I can tell, this means that your provided semantics for "lenient" evaluation are an allowed semantics for a Haskell runtime, according to the language spec. Haskell doesn't care if you immediately spark a thread for every single visible node in the entire graph, as long as you have a thread scheduler that guarantees that the spark for the head reduction won't be starved forever. Of course, you wouldn't ever do such a thing, because nobody wants to write a thread scheduler that has to deal with a million very short-lived threads.

Beef posted:

not all 'sequencings' have the same performance

This is true, and for high-performance Haskell you would have to explicitly schedule evaluations in order to maximize performance. That said, this is just as true of your "lenient" evaluator; you've simply moved the scheduling problem to the thread scheduler, which isn't likely to be an improvement. In practice, I rarely bother; GHC's speculative evaluator is good enough that I might only get at best one or two more threads worth of linear speedup by manually scheduling, before running into all the other problems with high-performance Haskell.

You might be interested in examining the modern GHC parallel evaluator, which is fairly advanced at this point. It behaves approximately by grouping reductions into subgraphs where it can prove that a reduction at the head of the subgraph will eventually force a reduction everywhere in the subgraph (strictness analysis) in order to reorganize each such subgraph into a single reduction which internally performs eager evaluation, and then starting a number of worker threads equal to the number of CPUs, where each CPU grabs the highest unclaimed reduction on the currently-visible graph and reduces it, then repeats. This trivially accomplishes the task of always having a thread working at the head of the graph, and does an okay job of minimizing the number of actual graph traversals and locks that need to be taken. It's an interesting mix of strict and lazy evaluation, which tends to select useful work for speculative evaluation fairly often.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
You initially outlined the following problem:

Beef posted:

For the former, imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons. You need to manually sprinkle our code with rpar, rseq and force, based on your deep knowledge of how the bowels of Haskell will digest your code.

Here's hoping that the next Haskell takes up something like 'lenient' evaluation: non-strict evaluation where each expression is evaluated 'eventually'. That is, you still have non-strict semantics but you lose the guarantee that an unused expression is not touched by the evaluator.

Unfortunately, as ShoulderDaemon points out, this won't solve the problem of explicit sequencing requirements. And worse, rseq and the like will have to become system-level operators that allow you to directly change the semantics of the evaluator (that is, how function application occurs) to express the desired meaning. Furthermore, system effects will turn into spaghetti in your "lenient" model. Consider a language with call/cc and various semantics:

Eager:
code:
> ((lambda (x y) y)
     (call/cc (lambda (k) (fact-k 5 (begin (set! x 10) (k 10)))))
     x)
10
Lazy:
code:
> ((lambda (x y) y)
     (call/cc (lambda (k) (fact-k 5 (begin (set! x 10) (k 10)))))
     x)
Exception: variable x not bound
What can we expect from lenient evaluation for this? It really depends on whether the expression containing call/cc has completed running (and producing its error, which will likely be ignored) before we try to evaluate the variable x. These lenient semantics have a serious race condition problem built into the underlying evaluator. The semantics you mentioned by Aditya et al. do a lot to address this problem, including the explicit use of operators like rtouch, schedule, and --- (which you would need to fix the above example), but this indicates that the lenient evaluator is subject to the same complaints as before: you'll need to use things equivalent to rseq when necessary, and must understand the nature of the evaluator to know when it's appropriate. I'm also failing to see how this avoids potential space leaks, as you'll have to carefully track when futures become unnecessary or fall out of scope to lose their pointers, and effectful futures may never do this.

Beef
Jul 26, 2004
Wait, what is the controversy here? That there are no reasons for other non-strict evaluation schemes? That there is no reason to move from laziness to some other non-strict evaluation?

I am not saying that something like lenient evaluation is a magic bullet that solves all the mentioned problems. I am saying that removing the possibility of a certain style of code, the style that relies on the guarantee that non-used expressions are never touched, makes the implementation of a more parallel Haskell a lot easier. Laziness is a controversial topic and is not strictly (heh) a necessary part of the language. This is from the Haskell-prime wiki: (emphasis mine)

quote:

These are properties or qualities of any Haskell standard we would like to preserve or make sure the language has. This is not about certain extensions or libraries the language should have, but rather properties of the language as a whole.
    ...
  • independent of the evaluation order. The report does not specify nor require lazy evaluation as an evaluation strategy, it only specifies a non-strict semantics.
  • admits an efficient implementation. features that require large amounts of run-time support or non-trivial restrictions on the implementation method should be avoided. (this is a tradeoff and the design space of haskell implementations has not fully been explored so we should be conservative when we can)
  • transformation safe. the language will not have features that cause common optimizations and transformations to become non-meaning-preserving. All lambda calculus transformations should apply. (this is broken by the MonomorphismRestriction (eta-reduction/expansion) and ImplicitParams (beta-reduction))
    ...

It does not mean you can suddenly deal with impurities like in QuantumNinja's example; we're still talking basically the same functional language here. You still need synchronisation and sequencing primitives to deal with outside effects. Incidentally, the I-Vars and M-Vars in Haskell have their origin in the I-structures and M-structures of the 'lenient evaluation' Id and pH languages, mostly because Id was really the first functional language that faced such parallelism/concurrency issues.


quote:

I think what has you confused about Haskell is that you have some idea that speculative evaluations are not allowed to produce exceptions or bottom.

quote:

...as long as you will eventually get around to reducing the head...

You might be overlooking a very important detail here: speculation is only worth it if the vast majority of the speculated execution is useful work. I don't know where the cutoff is in Haskell, but the cutoff in processor branch prediction is around 90%. In addition, you do not want parallelism to introduce the possibility of unbounded memory use.

quote:

I'm also failing to see how this avoids potential space leaks

You just avoid the space leaks that lazy evaluation introduces. As all expressions are eventually evaluated and contribute to the computation, you never keep unevaluated expressions in memory that shouldn't be kept there in the first place.

Beef
Jul 26, 2004
Back to Lisp-chat:

quote:

Can you give me an example of something you can do with Fexpr but not with syntax-case?

Capturing a variable in the surrounding environment is one that pops to mind. What you can express in an fexpr function but not as a defmacro is a lot trickier, as it completely depends on the surrounding language. These days fexprs are used as a simple construct to introduce reflection, not just on the expression of the unevaluated arguments, but also on the environment and continuation. Those fexprs aren't exactly the fexprs of MacLisp or Interlisp, though.

aerique
Jul 16, 2008
My Common Lisp hero Gábor Melis won the Kaggle "Higgs Boson Machine Learning Challenge": https://www.kaggle.com/c/higgs-boson/forums/t/10425/code-release/54514#post54514

He also competed in the Planet Wars Google AI Challenge which I 'competed' in as well. He sent me a binary of one of his bots once to test my bot-in-progress against. I can tell you that is not good for one's confidence!

My bot did improve considerably though, and it taught me to look at the problem differently.

The Planet Wars AI competition was awesome, one of the best I have participated in so far.

aerique
Jul 16, 2008
So, been looking at Haskell to try it out for a toy project. Holy mother of syntax! It seems to be on the opposite end of the syntax spectrum. It's like they made it intentionally difficult to get your head around.

Of course, these remarks say more about me than about the language and these feelings will subside once I get more comfortable with it, but it sure is a hard language to get started with.

It also has the same amount of zealots that Lisp had in the past, proclaiming how awesome the language is and how it will solve everything while never having used it seriously themselves.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
I picked up an Intel Edison Arduino Kit a few weeks ago and have both SBCL and CCL running well on it. Despite it being a dual-core 500MHz CPU, CL-bench shows it at 1/10 to 1/30 the performance of those releases running on my (2.7 GHz quad-core i7) MacBook Pro. I expect most of that is due to being x86-32 rather than x86-64, as well as being relatively memory-bandwidth-constrained.

Even so it's faster than my Raspberry Pi, and it should have enough performance to do some interesting things once I can connect its I/O up to Lisp!

Votlook
Aug 20, 2005

aerique posted:

It also has the same amount of zealots that Lisp had in the past, proclaiming how awesome the language is and how it will solve everything while never having used it seriously themselves.

And every single one of them writes the same crappy monad tutorial.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Votlook posted:

And every single one of them writes the same crappy monad tutorial.

But you don't understand! Monads are just burrito-wrapped space suits and once you think about them like outer space it is super easy to use them!

pgroce
Oct 24, 2002
At the risk of stating the obvious, people mostly write those tutorials in an effort to understand monads themselves.

Monads (and applicative functors) are neat, but seldom appropriate. There isn't much cultural space in programming for that, unfortunately.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
Got my Edison working with Lisp today by just wrapping the C libmraa using CFFI.

CFFI isn't hard to use once you understand it, but that understanding can be hard to come by without many examples or tutorials beyond the very basics. I puzzled it out though, and got an LED blinking from the CCL REPL!

aerique
Jul 16, 2008

eschaton posted:

CFFI isn't hard to use once you understand it, but that understanding can be hard to come by without many examples or tutorials beyond the very basics

Browsing CL code on e.g. GitHub ought to give a lot of examples as well.

Back in the day, I found it easier to get started with CMUCL's (now SBCL's) native FFI and then switch to UFFI (now CFFI, or is that backwards?).

drgnvale
Apr 30, 2004

A sword is not cutlery!
UFFI came before CFFI, and as best I can remember they didn't have much to do with each other. I think UFFI was just a compatibility layer between implementation FFIs, whereas CFFI tried to do things a bit differently and could support implementations UFFI didn't. But I haven't used either in 10 years, so I'm probably way off.

leftist heap
Feb 28, 2013

Fun Shoe
Common Lisp Megathread: I haven't used it in 10 years so I'm probably way off

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
Unlike all the other people who have talked about building a new OS in Lisp over the years, Henry Harrington actually did. It's called Mezzano and it runs on x86 hardware, pretty much just virtual machines at this point.

It's pretty rough, but it actually runs. It doesn't self-host yet but it does have a UI and even networking; it has both a local filesystem based on its persistence mechanism and a simple network file protocol for accessing files in a directory on the host. (Also common for the Lisp Machines of yore, and in today's emulators.)

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

eschaton posted:

a local filesystem based on its persistence mechanism

Tell me more :allears:

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

QuantumNinja posted:

Tell me more :allears:

It's all in file/local.lisp in the Mezzano source tree.

In all honesty I think it might be better to have the baseline filesystem be easily writable from "outside," even just using VFAT. Otherwise it will be harder to get to self-hosting; all sources will need to be accessed via the network (though that also has advantages), and compiling and saving could render a system image not only unreadable but unrecoverable.

eschaton fucked around with this message at 16:00 on Mar 1, 2015

Dessert Rose
May 17, 2004

awoken in control of a lucid deep dream...
Hi thread, I just discovered how much I love Lisp after deciding to try Clojure, picking up Emacs and oh god I'm in the parentheses vortex now.

aerique posted:

The Planet Wars AI competition was awesome, one of the best I have participated in so far.

This was what got me into FP in the first place; Haskell was the new hotness and I had some free programming energy at the time, so I decided to throw myself at it.

Had a friendly competition with a coworker, both of us learning the language from scratch. Was super fun. I love playing that game manually so writing a bot to do it was pretty amazing.



So, I've had a concept floating around in my head for a little more than a decade. Basically I want to make an artificial life zoo.

I want to create a simple "world", like for example Minecraft, which has some basic rules, and humans can visit as well as computers. And then I want to evolve life there using genetic programming. So people will visit this game world and see it populated with many distinct life forms that have evolved to thrive in this world.

And maybe you can run your own copy with slightly different parameters, and our life forms can travel through "space" to each others' worlds and cross-pollinate.

After about 8 billion CPU-years, maybe they'll start simulating their own universes.

Anyway, it seems like Lisps are essentially made to do GP, certainly better than any of my other known languages (C#, F# and a bunch of procedural p-langs). From a more practical standpoint, Clojure (and elisp too) just make sense to me on a fundamental level. I'm still struggling with the standard issues of learning any new language (idioms, API) but as soon as I have all the parts in front of me they just go together.

Except when I write macros that evaluate to macros. Still don't have the hang of explicit multiple scopes there yet.

It took me maybe three hours to have a hacky, but working function that can take a quoted form and "mutate" it into something else. The world simulation is really easy to write, too. I even found a use for dynamic binding. And it's all easily parallelizable so I'll be able to take advantage of SMP.
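Roughly the shape of that mutate function, as a much dumbed-down sketch (not the real thing; the op/terminal sets and probabilities are placeholders):
code:
(def ops       '[+ - * max min])
(def terminals '[x 1 2 3])

(defn random-node []
  (if (< (rand) 0.5)
    (rand-nth terminals)
    (list (rand-nth ops) (rand-nth terminals) (rand-nth terminals))))

(defn mutate
  "Walk a quoted form; with small probability, replace a subtree."
  [form]
  (cond
    (< (rand) 0.1) (random-node)
    (seq? form)    (cons (first form) (map mutate (rest form)))
    :else          form))

;; (mutate '(+ x (* 2 x)))  ;=> e.g. (+ x (* 2 3)) or (+ (min x 1) (* 2 x))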

I haven't had to go outside of clojure.core for much, except zipper, but I really like the effort put into "practicality" in this language. Someone upthread said that Rich made a large number of the right choices for Clojure and I have to agree. I'm glad that I have access to the entire Java ecosystem without having to write Java.

Sure is a shame about that debuggability though. At least I'm pretty much used to working in languages or on projects where I'm frequently reduced to slime trail debugging anyway.

minidracula
Dec 22, 2007

boo woo boo

Dessert Rose posted:

Anyway, it seems like Lisps are essentially made to do GP, certainly better than any of my other known languages (C#, F# and a bunch of procedural p-langs). From a more practical standpoint, Clojure (and elisp too) just make sense to me on a fundamental level. I'm still struggling with the standard issues of learning any new language (idioms, API) but as soon as I have all the parts in front of me they just go together.
I recommend looking at John Koza's early GP work, all of which was in Common Lisp, for background and inspiration. Since then the GP field has become more varied (at one point Koza's platform was actually re-implemented in Java, running on Beowulf clusters c. 2000 [IIRC]), but I think you'll find a lot to like.

Dessert Rose posted:

I haven't had to go outside of clojure.core for much, except zipper, but I really like the effort put into "practicality" in this language. Someone upthread said that Rich made a large number of the right choices for Clojure and I have to agree. I'm glad that I have access to the entire Java ecosystem without having to write Java.

Sure is a shame about that debuggability though. At least I'm pretty much used to working in languages or on projects where I'm frequently reduced to slime trail debugging anyway.
I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over.

Dessert Rose
May 17, 2004

awoken in control of a lucid deep dream...

minidracula posted:

I recommend looking at John Koza's early GP work, all of which was in Common Lisp, for background and inspiration. Since then the GP field has become more varied (at one point Koza's platform was actually re-implemented in Java, running on Beowulf clusters c. 2000 [IIRC]), but I think you'll find a lot to like.

Ooo, thanks! It's been really hard to find many up-to-date resources on this stuff. It took quite a long time to find something that explained the basic concepts underpinning exactly how you mutate a program (I had been stuck on "what if the result doesn't even compile?" for a while).

quote:

I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over.

Yeah, if I had already been spoiled by good Lisp debugging I would probably hate this, but as it is I consider it an unexpected treat when I can inspect the value of a variable at runtime, and being able to try out code quickly in a running context is just unheard of. Composing my program from many smaller chunks that I can test easily in the REPL alleviates a lot of the pain.

minidracula
Dec 22, 2007

boo woo boo

Dessert Rose posted:

Ooo, thanks! It's been really hard to find many up-to-date resources on this stuff. It took quite a long time to find something that explained the basic concepts underpinning exactly how you mutate a program (I had been stuck on "what if the result doesn't even compile?" for a while).
Oh, and also: since you're working in Clojure, and the Clojure version of this seems to be the current hotness that Lee Spector is doing the most development and maintenance work on, you should check out Push & PushGP (http://faculty.hampshire.edu/lspector/push.html) and the Clojure version "Clojush" (https://github.com/lspector/Clojush).

aerique
Jul 16, 2008

Dessert Rose posted:

[Lisp, Planet Wars and Genetic Programming]

Besides what has been mentioned, there are a lot of free online resources, e.g. A Field Guide to Genetic Programming

There was also a Planet Wars team that evolved their bot using GP: http://planetwars.aichallenge.org/profile.php?user_id=4038. It even finished a couple of places above me :-| (But they also got me enthusiastic for GP, cue embarrassing (and incorrect) blog post: http://www.aerique.net/blog/2011/01-18-baby-steps-into-genetic-programming.html)

Dessert Rose posted:

Sure is a shame about that debuggability though. At least I'm pretty much used to working in languages or on projects where I'm frequently reduced to slime trail debugging anyway.

You might want to play around with Common Lisp and Slime to enjoy better debuggability. Since Quicklisp, the library ecosystem isn't that bad.

ToxicFrog
Apr 26, 2008


minidracula posted:

I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over.

Yeah, that's what makes it hard for me to recommend Clojure, despite the fact that I really enjoy working on it.

Today's syntax error. Root cause: forgot a symbol in :refer [...]. Total length of error message: 119 lines. Amount of that that's useful: 1 line.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Dessert Rose posted:

Hi thread, I just discovered how much I love Lisp after deciding to try Clojure, picking up Emacs and oh god I'm in the parentheses vortex now.

Are you sticking with Clojure or have you jumped into Common Lisp at this point?

If you're only just considering Common Lisp now, I can vouch for Clozure Common Lisp being a decent free environment, including the IDE if you use a Mac. If you're sticking with Emacs/SLIME, you can go with either CCL or SBCL, or any of a number of other Lisp environments.

pgroce
Oct 24, 2002

ToxicFrog posted:

Yeah, that's what makes it hard for me to recommend Clojure, despite the fact that I really enjoy working on it.

Today's syntax error. Root cause: forgot a symbol in :refer [...]. Total length of error message: 119 lines. Amount of that that's useful: 1 line.

You using CIDER? It doesn't have a debugger (they've made noises about incorporating the Ritz debugger, which was informed by Slime), but when you get errors it lets you automatically filter out the tooling and clojure-infrastructure stack frames that are just hiding the actual error 99% of the time.

leftist heap
Feb 28, 2013

Fun Shoe
The Clojure world is slowly rebuilding SLIME.

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

rrrrrrrrrrrt posted:

The Clojure world is slowly rebuilding SLIME.

Just as the SLIME world is slowly rebuilding the LispM.


leftist heap
Feb 28, 2013

Fun Shoe
I get why it's all happening and really SLIME probably wasn't ever going to work out for Clojure, but at one point a part of me was really excited at the idea of Clojure being almost a drop-in CL replacement.
