|
DreadCthulhu posted:Welp, I'm transitioning to Yesod framework on Haskell with our next web app project instead of continuing with the Clojure/Ring route. Microframeworks are fun for simple things and are great for learning why a framework is important. For serious work it's actually really painful to replicate all of the convenience you get from a well thought-out framework with years of work put into it and dozens of much smarter people than me improving it. Easy switching between JSON-only and server-side HTML/CSS/JS generation, boundary validation / input validation, baked-in auth and authz plugins w/ option to write your own, a solid reloading / rebuilding / retesting flow, hashing of assets etc. It's basically like Rails, except everything from HTML templates, request data, to URL params is statically checked, you have to try pretty hard to gently caress it up. Monad transformers are a bit tricky to reason about, and error messages can be nightmarish because of the monad layering, but I think you get used to it once you grok it. Snoyman is about 1000x smarter than me, so I trust his design choices. Coincidentally, I'm just starting work on a project using Luminus (which I understand is more batteries-included than straight Ring) and core.typed for type checking, specifically to address those two issues. I will report back on how it goes!
|
# ? May 19, 2014 17:39 |
|
|
|
ToxicFrog posted:Coincidentally, I'm just starting work on a project using Luminus (which I understand is more batteries-included than straight Ring) and core.typed for type checking, specifically to address those two issues. I will report back on how it goes! Luminus is basically a bare-bones Ring app with a bunch of dependencies already added to project.clj for you - you can read the source for the entire thing in 5 minutes. Building a micro-framework from the ground up in clj is basically unavoidable at this point, unless something changed dramatically in the past month or so while I wasn't paying attention. Oh yeah, Pedestal, except nobody uses it. Would be interesting to see how core.typed turns out though.
|
# ? May 19, 2014 19:52 |
|
jneen posted:Well, of course you'll have better luck connecting to oracle on *the jvm*. Oracle RDBMS is nearly two decades older than the JVM, let alone Oracle's ownership of HotSpot. OP's point is that the library ecosystem as a whole is much more extensive for Java or JVM languages than for Haskell (or basically any other platform), which is pretty inarguable. Deus Rex fucked around with this message at 22:13 on May 19, 2014 |
# ? May 19, 2014 22:09 |
|
Deus Rex posted:Oracle RDBMS is nearly two decades older than the JVM, let alone Oracle's ownership of HotSpot. OP's point is that the library ecosystem as a whole is much more extensive for Java or JVM languages than for Haskell (or basically any other platform), which is pretty inarguable. I suppose, although using java in clojure is still using an ffi, like you'd do in Haskell. And there's loads of libraries for C!
|
# ? May 21, 2014 19:33 |
|
QuantumNinja posted:As someone who has written papers on miniKanren, this is incredibly impractical. Every naive miniKanren implementation is far too slow to reasonably compute something like that, and Adderall suggests it isn't focused on speed. More importantly, without something like rKanren's search logic, you'd have to generate hundreds or thousands of levels before you got one interesting enough to play in. A better move would be to write a functional generator, and then use miniKanren to check the level to see if the constraints hold. Hm, too bad, it sounded like it would have been a good match. But thanks for the tip, I'd better try something else then. I'm still going to learn miniKanren though, just because I like the idea. What if I had the topology of the level down already (rooms and their connections to each other) and just needed to place items and monsters and such in those rooms? Would that simplify/speed up the process? Basically, I just want to write something along the lines of:
- there's stairs somewhere in the level (player starts here)
- there's treasure as far as possible from the stairs
- next to the treasure is some sort of boss room
- if there's an unused dead-end, place a minor treasure there
and so on.
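The generate-then-check split QuantumNinja suggests stays pretty small once the topology is fixed. A minimal sketch along those lines (room names and placement rules invented for illustration): breadth-first distances from the stairs pick the treasure room, the boss goes in a room adjacent to the treasure, and leftover dead ends get minor loot.

```haskell
import qualified Data.Map as M
import Data.List (maximumBy)
import Data.Ord (comparing)

type Room  = String
type Graph = M.Map Room [Room]

-- Breadth-first distances from a starting room.
distances :: Graph -> Room -> M.Map Room Int
distances g start = go (M.singleton start 0) [start]
  where
    go seen []     = seen
    go seen (r:rs) =
      let next  = [ n | n <- M.findWithDefault [] r g
                      , not (n `M.member` seen) ]
          d     = seen M.! r + 1
          seen' = foldr (\n m -> M.insert n d m) seen next
      in go seen' (rs ++ next)

-- Place stairs, treasure, boss, and minor loot by the stated rules.
place :: Graph -> Room -> [(Room, String)]
place g stairs =
  let ds       = distances g stairs
      treasure = fst (maximumBy (comparing snd) (M.toList ds))
      boss     = head (M.findWithDefault [] treasure g)
      deadEnds = [ r | (r, ns) <- M.toList g
                     , length ns == 1
                     , r `notElem` [stairs, treasure, boss] ]
  in (stairs, "stairs") : (treasure, "treasure") : (boss, "boss")
     : [ (r, "minor treasure") | r <- deadEnds ]

-- A tiny fixed topology to try it on.
level :: Graph
level = M.fromList
  [ ("entry", ["hall"]), ("hall", ["entry", "crypt", "cell"])
  , ("crypt", ["hall", "vault"]), ("vault", ["crypt"])
  , ("cell", ["hall"]) ]

main :: IO ()
main = mapM_ print (place level "entry")
```

Once this produces candidate placements, a relational checker (miniKanren or otherwise) only has to validate them, which sidesteps the search-speed problem entirely.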
|
# ? May 25, 2014 10:50 |
|
Welp, putting a yesod app in production as a paid product tomorrow, this is going to be interesting. After working with homebrew micro-frameworks for a couple of years it was very interesting to have to go back to that feeling you get with Rails when you have no idea where anything is configured/set/changed/added and you need to learn someone else's interpretation of the universe. The upside is that obviously the guys maintaining the framework are really vicious about keeping it elegant and effective, so I'm happy with finally being able to delegate all of the "scaffolding" to someone else who knows what they're doing. Still not fully comfortable with building complex routes with query strings and working with Shakespearean templates, I got used to jinja-level of awesome and yesod's paradigm is somewhat different, but it's a good additional point of reference.
|
# ? Aug 6, 2014 07:19 |
|
Btw if you're not using Schema for your Clojure APIs, you definitely should start now. Type/schema checking + type coercion all in one, makes solving the boundary issue just a bit easier in Clojure. Obviously having static types would be nicer, but that's not happening in Clojure, so might as well get the next best thing.
|
# ? Aug 8, 2014 04:16 |
|
DreadCthulhu posted:Btw if you're not using Schema for your Clojure APIs, you definitely should start now. Type/schema checking + type coercion all in one, makes solving the boundary issue just a bit easier in Clojure. Obviously having static types would be nicer, but that's not happening in Clojure, so might as well get the next best thing. What? You know about core.typed, right?
|
# ? Aug 8, 2014 15:24 |
|
To dredge up the lazy evaluation derail/discussion again, just so I can give my informed opinion on the topic: Haskell's way of achieving non-strictness through lazy evaluation is problematic in two big areas, even to the most experienced Haskellers: concurrency/parallelism and space leaks. The latter is simple to imagine: the runtime has to keep thunks around that might never be evaluated. For the former, imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons. You need to manually sprinkle your code with rpar, rseq and force, based on your deep knowledge of how the bowels of Haskell will digest your code. Here's hoping that the next Haskell takes up something like 'lenient' evaluation: non-strict evaluation where each expression is evaluated 'eventually'. That is, you still have non-strict semantics but you lose the guarantee that an unused expression is not touched by the evaluator. Your prime sieve code might grow a few lines, but sometimes the cost is just not worth the headache. Linking back to Lisp: EM-Lisp had lenient evaluation and was designed from the ground up to run on a parallel architecture. Additionally, the Lisp family has phased out expressive features in the past in favor of more pragmatic alternatives. For instance: fexprs were replaced with macros. Fexprs were strictly more expressive, but were uncompilable and macros fit 99% of the use cases anyway. (Speaking of Haskell, is anyone attending ICFP next month?)
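Written out as actual Haskell (firstArg here is just a hypothetical selector that discards its last two arguments), the example is harmless under lazy evaluation precisely because the unused thunks are never forced; the worry is what an evaluator does once it starts forcing everything eagerly in parallel:

```haskell
-- A hypothetical three-argument selector: only the first argument is used.
firstArg :: a -> b -> c -> a
firstArg x _ _ = x

main :: IO ()
main = print (firstArg (1 :: Int) (error "exp-b") (42 `div` 0))
-- Under lazy evaluation this prints 1: neither the error thunk nor the
-- division by zero is ever evaluated. A naively eager-parallel evaluator
-- would force all three arguments and hit the division by zero.
```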
|
# ? Aug 14, 2014 17:33 |
|
Beef posted:imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons This example confuses me, because as a fairly experienced Haskell programmer I can't see anything wrong with evaluating that expression in any order you'd care to try. There's no evidence of unsafe sequencing, so I would expect it to always have the same result regardless of evaluation order. What are the "obvious reasons" you're thinking of? Furthermore, the GHC threaded runtime is hardly "execute all expressions in parallel". There's a significant amount of analysis that goes on under the hood to try to choose reduction orders which both avoid massive lock contention and make good use of parallel resources. In practice, I write real-world programs which regularly get linear speedup up to around 4 parallel threads with no explicit parallel annotations whatsoever, which isn't at all bad for something I'm getting completely for free. I typically never use explicit sequencing annotations except when writing interactive programs where I want to move my latency to different places in the evaluation order for user-interface reasons; the compiler does a good-enough job of discovering actual parallelism opportunities that it "just works". I'm not really clear on your distinction between "lenient" evaluation and Haskell's "lazy" evaluation. The semantics of Haskell allow the evaluator to speculatively reduce thunks which may never be needed. It is only required to avoid speculative evaluations which would cause nontermination of a program which would terminate with nonspeculative evaluation, but in a parallel evaluator that's trivially accomplished by, for example, always having one thread working on the nonspeculative tasks.
|
# ? Aug 14, 2014 18:11 |
|
Beef posted:To dredge up the lazy evaluation derail/discussion again, just so I can give my informed opinion on the topic: Can you provide a derivation or set of small-step operational semantics for this lenient evaluator? Or, better yet, since this is a lisp thread just post a small interpreter. I ask because it seems like you are merely describing strong normalization, and there are already language that do this. Agda, for example, only guarantees strong normalization, and there are both eager and lazy implementations of it. Unfortunately, there are still needs for an escape hatch, and even in a strongly normalizing context it is difficult to specify how parallelism will proceed without thread waits, either implicit or explicit. I don't see how this addresses the problem in general. Beef posted:Additionally, The Lisp family has phased out expressive features in the past in favor of more pragmatic alternatives. For instance: Fexpr were replaced with macros. Fexpr ere strictly more expressive, but were uncompilable and macros fit 99% of the use cases. Can you give me an example of something you can do with Fexpr but not with syntax-case? (Gothenburg, is, unfortunately, too rich for my blood.) Edit: in other news, the Scheme Workshop 2015 CFP is going on until September 5.
|
# ? Aug 15, 2014 20:57 |
|
QuantumNinja posted:Can you provide a derivation or set of small-step operational semantics for this lenient evaluator? Or, better yet, since this is a lisp thread just post a small interpreter. Writing a complete semantics for it is a bit much considering the concurrency involved. If you are interested in the detailed semantics you can check out this old paper: S. Aditya, Arvind, J.-W. Maessen, L. Augustsson, and R. S. Nikhil, "Semantics of pH: A parallel dialect of Haskell," presented at FPCA '95, 1995, pp. 35–49. Here's my take on it with a dumb sketch: If this is eager evaluation: code:
code:
code:
In the lenient case you will still evaluate all expressions, you just don't wait for the result if you do not have to, which creates some parallelism. You can do the same with lazy evaluation and, as ShoulderDaemon pointed out, that gets enough parallelism to keep a small number of threads busy. However, laziness can bite you in the rear end in larger-scale programs and hardware cases: a) speculative computing works if the compiler/runtime/you can accurately guess what will contribute to the result b) not all 'sequencings' have the same performance and c) speculatively evaluating '42/0' or various other trapdoor expressions can lead to errors and non-termination that do not exist in the non-speculative case. ShoulderDaemon posted:What are the "obvious reasons" you're thinking of? code:
ShoulderDaemon posted:It is only required to avoid speculative evaluations which would cause nontermination of a program which would terminate with nonspeculative evaluation[...] quick edit: Check out the awesome: Parallel and Concurrent Programming in Haskell. another quick edit: Looks like JW Maessen wrote an entire PhD on the subject. Beef fucked around with this message at 19:09 on Aug 19, 2014 |
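You can get a feel for this lenient style in today's Haskell by simulating it with a thread per expression — a rough sketch only, using just base, not pH's actual semantics (the MVar stands in for the I-structure slot a lenient evaluator would fill in):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)

-- Lenient-style "spawn": start evaluating x to WHNF immediately in its
-- own thread; callers block only when they actually read the slot.
spawn :: a -> IO (MVar a)
spawn x = do
  slot <- newEmptyMVar
  _ <- forkIO (putMVar slot $! x)
  return slot

main :: IO ()
main = do
  a  <- spawn (sum [1 .. 1000 :: Int])
  _b <- spawn (product [1 .. 10 :: Int])
  -- A lenient evaluator would also have spawned Beef's 42 `div` 0 here,
  -- and that thread would die with an arithmetic error even though
  -- nothing ever demands the value.
  r <- readMVar a   -- block only on the result we actually need
  print r
```

This is the trade in miniature: non-strict semantics survive (nothing waits on _b), but the "unused expressions are never touched" guarantee does not.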
# ? Aug 19, 2014 18:58 |
|
Beef posted:speculative computing works if the compiler/runtime/you can accurately guess what will contribute to the result In Haskell, assuming that there exists any reduction which will contribute forward progress, the headmost reduction will be one such reduction; this is precisely the guarantee that not being a strict language provides. As a result, as long as any runtime guarantees that headmost reductions will eventually be performed (this is trivial) they will exhibit the termination requirement specified in the language standard. This is a completely general solution which does not limit parallelism and does not require solving the halting problem; all visible reductions are available to be reduced at any time, in any order, as long as you will eventually get around to reducing the head. I think what has you confused about Haskell is that you have some idea that speculative evaluations are not allowed to produce exceptions or bottom. Consider this case: code:
code:
We might wind up with an intermediate graph that looks like this, if we picked a particularly degenerate evaluation order: code:
code:
As near as I can tell, this means that your provided semantics for "lenient" evaluation are an allowed semantics for a Haskell runtime, according to the language spec. Haskell doesn't care if you immediately spark a thread for every single visible node in the entire graph, as long as you have a thread scheduler that guarantees that the spark for the head reduction won't be starved forever. Of course, you wouldn't ever do such a thing, because nobody wants to write a thread scheduler that has to deal with a million very short-lived threads. Beef posted:not all 'sequencings' have the same performance This is true, and for high-performance Haskell you would have to explicitly schedule evaluations in order to maximize performance. That said, this is just as true of your "lenient" evaluator; you've simply moved the scheduling problem to the thread scheduler, which isn't likely to be an improvement. In practice, I rarely bother; GHC's speculative evaluator is good enough that I might only get at best one or two more threads worth of linear speedup by manually scheduling, before running into all the other problems with high-performance Haskell. You might be interested in examining the modern GHC parallel evaluator, which is fairly advanced at this point. It behaves approximately by grouping reductions into subgraphs where it can prove that a reduction at the head of the subgraph will eventually force a reduction everywhere in the subgraph (strictness analysis) in order to reorganize each such subgraph into a single reduction which internally performs eager evaluation, and then starting a number of worker threads equal to the number of CPUs, where each CPU grabs the highest unclaimed reduction on the currently-visible graph and reduces it, then repeats. This trivially accomplishes the task of always having a thread working at the head of the graph, and does an okay job of minimizing the number of actual graph traversals and locks that need to be taken. 
It's an interesting mix of strict and lazy evaluation, which tends to select useful work for speculative evaluation fairly often.
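The "bottom is an ordinary value until somebody uses it" behavior ShoulderDaemon describes is observable from ordinary code via GHC's imprecise exceptions — a small base-only sketch:

```haskell
import Control.Exception (ArithException, evaluate, try)

main :: IO ()
main = do
  -- Force a thunk that reduces to bottom; the failure becomes a value...
  r <- try (evaluate (42 `div` 0 :: Int)) :: IO (Either ArithException Int)
  -- ...which the rest of the program is free to inspect or discard.
  print (either (const (-1)) id r)
```

A speculative evaluator that reduces such a thunk early hasn't broken anything as long as the resulting exceptional value can still be thrown away instead of propagated.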
|
# ? Aug 19, 2014 19:57 |
|
You initially outlined the following problem: Beef posted:For the former, imagine the program 'firstArg exp-a exp-b 42/0'; you cannot have an evaluator that just executes all expressions in parallel for obvious reasons. You need to manually sprinkle our code with rpar, rseq and force, based on your deep knowledge of how the bowels of Haskell will digest your code. Unfortunately, as ShoulderDaemon points out, this won't solve the problem of explicit sequencing requirements. And worse, rseq and the like will have to become system-level operators that allow you to directly change the semantics of the evaluator (that is, how function application occurs) to express the desired meaning. Furthermore, system effects will turn into spaghetti in your "lenient" model. Consider a language with call/cc and various semantics: Eager: code:
code:
|
# ? Aug 19, 2014 20:28 |
|
Wait, what is the controversy here? That there are no reasons for other non-strict evaluation schemes? That there is no reason to move from laziness to some other non-strict evaluation? I am not saying that something like lenient evaluation is a magic bullet that solves all the mentioned problems. I am saying that removing the possibility of a certain style of code that relies on the 'non-used exprs are never touched' guarantee makes the implementation of a more parallel Haskell a lot easier. Laziness is a controversial topic and is not strictly (heh) a necessary part of the language. This is from the Haskell-prime wiki: (emphasis mine) quote:These are properties or qualities of any Haskell standard we would like to preserve or make sure the language has. This is not about certain extensions or libraries the language should have, but rather properties of the language as a whole. It does not mean you can suddenly deal with impurities like in QuantumNinja's example, we're still talking basically the same functional language here. You still need synchronisation and sequencing primitives to deal with outside effects. Incidentally, the IVars and MVars in Haskell have their origin in the I-structures and M-structures of the 'lenient evaluation' Id and pH languages, mostly because Id was really the first functional language that faced such parallelism/concurrency issues. quote:I think what has you confused about Haskell is that you have some idea that speculative evaluations are not allowed to produce exceptions or bottom. quote:...as long as you will eventually get around to reducing the head... quote:I'm also failing to see how this avoids potential space leaks
|
# ? Aug 20, 2014 17:29 |
|
Back to Lisp-chat: quote:Can you give me an example of something you can do with fexprs but not with syntax-case? Capturing a variable in the surrounding environment is one that pops to mind. What you can express in a fexpr but not as a defmacro is a lot trickier as it completely depends on the surrounding language. These days fexprs are used as a simple construct to introduce reflection, not just on the expression of the unevaluated arguments, but also on the environment and continuation. Those fexprs aren't exactly the fexprs of MacLisp or Interlisp though.
|
# ? Aug 20, 2014 17:37 |
|
My Common Lisp hero Gábor Melis won the Kaggle "Higgs Boson Machine Learning Challenge": https://www.kaggle.com/c/higgs-boson/forums/t/10425/code-release/54514#post54514 He also competed in the Planet Wars Google AI Challenge which I 'competed' in as well. He sent me a binary of one of his bots once to test my bot-in-progress against. I can tell you that is not good for one's confidence! My bot did improve considerably though and taught me to look at the problem differently. The Planet Wars AI competition was awesome, one of the best I have participated in so far.
|
# ? Sep 24, 2014 09:24 |
|
So, been looking at Haskell to try it out for a toy project. Holy mother of syntax! It seems to be on the opposite end of the syntax-spectrum. It's like they made it intentionally difficult to get your head around. Of course, these remarks say more about me than about the language and these feelings will subside once I get more comfortable with it, but it sure is a hard language to get started with. It also has the same number of zealots that Lisp had in the past, proclaiming how awesome the language is and how it will solve everything while never having used it seriously themselves.
|
# ? Sep 25, 2014 20:26 |
|
I picked up an Intel Edison Arduino Kit a few weeks ago and have both SBCL and CCL running on it well. Despite its dual-core 500MHz CPU, CL-bench shows it at 1/10 to 1/30 the performance of those releases running on my (2.7 GHz quad-core i7) MacBook Pro. I expect most of that is being x86-32 rather than x86-64, as well as being relatively memory-bandwidth-constrained. Even so it's faster than my Raspberry Pi and it should have enough performance to do some interesting things once I can connect its I/O up to Lisp!
|
# ? Jan 23, 2015 06:50 |
|
aerique posted:It also has the same amount of zealots that Lisp had in the past, proclaiming how awesome the language is and how it will solve everything while never having used it seriously themselves. And every single one of them writes the same crappy monad tutorial.
|
# ? Jan 23, 2015 17:11 |
|
Votlook posted:And every single one of them writes the same crappy monad tutorial. But you don't understand! Monads are just burrito-wrapped space suits and once you think about them like outer space it is super easy to use them!
|
# ? Jan 24, 2015 03:04 |
|
At the risk of stating the obvious, people mostly write those tutorials in an effort to understand monads themselves. Monads (and applicative functors) are neat, but seldom appropriate. There isn't much cultural space in programming for that, unfortunately.
|
# ? Jan 25, 2015 03:14 |
|
Got my Edison working with Lisp today by just wrapping the C libmraa using CFFI. CFFI isn't hard to use once you understand it, but that understanding can be hard to come by without many examples or tutorials beyond the very basics. I puzzled it out though, and got an LED blinking from the CCL REPL!
|
# ? Jan 26, 2015 06:57 |
|
eschaton posted:CFFI isn't hard to use once you understand it, but that understanding can be hard to come by without many examples or tutorials beyond the very basics Browsing CL code on e.g. GitHub ought to give a lot of examples as well. Back in the day, I found it easier to get started with CMUCL's (now SBCL) native FFI and then switching to UFFI (now CFFI, or is that backwards?).
|
# ? Jan 26, 2015 10:45 |
|
UFFI came before CFFI, and as best I can remember they didn't have much to do with each other. I think UFFI was just a compatibility layer between implementation FFIs, whereas CFFI tried to do things a bit differently and could support implementations UFFI didn't. But I haven't used either in 10 years, so I'm probably way off.
|
# ? Jan 29, 2015 08:08 |
|
Common Lisp Megathread: I haven't used it in 10 years so I'm probably way off
|
# ? Jan 30, 2015 01:45 |
|
Unlike all the other people who have talked about building a new OS in Lisp over the years, Henry Harrington actually did. It's called Mezzano and it runs on x86 hardware, pretty much just virtual machines at this point. It's pretty rough, but it actually runs. It doesn't self-host yet but it does have a UI and even networking; it has both a local filesystem based on its persistence mechanism and a simple network file protocol for accessing files in a directory on the host. (Also common for the Lisp Machines of yore, and in today's emulators.)
|
# ? Mar 1, 2015 07:31 |
|
eschaton posted:a local filesystem based on its persistence mechanism Tell me more
|
# ? Mar 1, 2015 09:24 |
|
QuantumNinja posted:Tell me more It's all in file/local.lisp in the Mezzano source tree. In all honesty I think it might be better to have the baseline filesystem be easily writable from "outside," even just using VFAT. Otherwise it will be harder to get to self-hosting; all sources will need to be accessed via the network (though that also has advantages), and compiling and saving could render a system image not only unreadable but unrecoverable. eschaton fucked around with this message at 16:00 on Mar 1, 2015 |
# ? Mar 1, 2015 12:57 |
|
Hi thread, I just discovered how much I love Lisp after deciding to try Clojure, picking up Emacs and oh god I'm in the parentheses vortex now. aerique posted:The Planet Wars AI competition was awesome, one of the best I have participated in so far. This was what got me into FP in the first place, Haskell was the new hotness and I had some free programming energy at the time, so decided to throw myself at it. Had a friendly competition with a coworker, both of us learning the language from scratch. Was super fun. I love playing that game manually so writing a bot to do it was pretty amazing. So, I've had a concept floating around in my head for a little more than a decade. Basically I want to make an artificial life zoo. I want to create a simple "world", like for example Minecraft, which has some basic rules, and humans can visit as well as computers. And then I want to evolve life there using genetic programming. So people will visit this game world and see it populated with many distinct life forms that have evolved to thrive in this world. And maybe you can run your own copy with slightly different parameters, and our life forms can travel through "space" to each others' worlds and cross-pollinate. After about 8 billion CPU-years, maybe they'll start simulating their own universes. Anyway, it seems like Lisps are essentially made to do GP, certainly better than any of my other known languages (C#, F# and a bunch of procedural p-langs). From a more practical standpoint, Clojure (and elisp too) just make sense to me on a fundamental level. I'm still struggling with the standard issues of learning any new language (idioms, API) but as soon as I have all the parts in front of me they just go together. Except when I write macros that evaluate to macros. Still don't have the hang of explicit multiple scopes there yet. It took me maybe three hours to have a hacky but working function that can take a quoted form and "mutate" it into something else.
The world simulation is really easy to write, too. I even found a use for dynamic binding. And it's all easily parallelizable so I'll be able to take advantage of SMP. I haven't had to go outside of clojure.core for much, except zipper, but I really like the effort put into "practicality" in this language. Someone upthread said that Rich made a large number of the right choices for Clojure and I have to agree. I'm glad that I have access to the entire Java ecosystem without having to write Java. Sure is a shame about that debuggability though. At least I'm pretty much used to working in languages or on projects where I'm frequently reduced to slime trail debugging anyway.
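On the "mutate a quoted form" part: in tree-based GP the mutation operates on the AST rather than program text, so every operator maps well-formed programs to well-formed programs and "doesn't compile" can't happen. A toy sketch (the expression type and the deterministic preorder-index mutation are invented for illustration; real GP would pick nodes and replacement subtrees randomly):

```haskell
-- A tiny expression language for evolved programs.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  deriving (Show, Eq)

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Number of nodes in a tree, used to address subtrees in preorder.
size :: Expr -> Int
size (Lit _)   = 1
size (Add a b) = 1 + size a + size b
size (Mul a b) = 1 + size a + size b

-- Replace the subtree at preorder index i; because the replacement is
-- itself an Expr, the result is always a valid program.
mutateAt :: Int -> Expr -> Expr -> Expr
mutateAt 0 new _ = new
mutateAt i new (Add a b)
  | i <= size a = Add (mutateAt (i - 1) new a) b
  | otherwise   = Add a (mutateAt (i - 1 - size a) new b)
mutateAt i new (Mul a b)
  | i <= size a = Mul (mutateAt (i - 1) new a) b
  | otherwise   = Mul a (mutateAt (i - 1 - size a) new b)
mutateAt _ _ e = e  -- index fell on a leaf; nothing to descend into

main :: IO ()
main = do
  let parent = Add (Lit 1) (Mul (Lit 2) (Lit 3))  -- 1 + 2*3
      child  = mutateAt 2 (Lit 10) parent          -- swap out a subtree
  print (eval parent, eval child)
```

A quoted Clojure form is the same idea with nested lists as the tree, which is why Lisps feel made for this.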
|
# ? Mar 6, 2015 20:45 |
|
Dessert Rose posted:Anyway, it seems like Lisps are essentially made to do GP, certainly better than any of my other known languages (C#, F# and a bunch of procedural p-langs). From a more practical standpoint, Clojure (and elisp too) just make sense to me on a fundamental level. I'm still struggling with the standard issues of learning any new language (idioms, API) but as soon as I have all the parts in front of me they just go together. I recommend looking at John Koza's early GP work, all of which was in Common Lisp, for background and inspiration. Since then the GP field has become more varied (at one point Koza's platform was actually re-implemented in Java, running on Beowulf clusters c. 2000 [IIRC]), but I think you'll find a lot to like. Dessert Rose posted:I haven't had to go outside of clojure.core for much, except zipper, but I really like the effort put into "practicality" in this language. Someone upthread said that Rich made a large number of the right choices for Clojure and I have to agree. I'm glad that I have access to the entire Java ecosystem without having to write Java. I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over.
|
# ? Mar 6, 2015 20:54 |
|
minidracula posted:I recommend looking at John Koza's early GP work, all of which was in Common Lisp, for background and inspiration. Since then the GP field has become more varied (at one point Koza's platform was actually re-implemented in Java, running on Beowulf clusters c. 2000 [IIRC]), but I think you'll find a lot to like. Ooo, thanks! It's been really hard to find many up-to-date resources on this stuff. It took quite a long time to find something that explained the basic concepts underpinning exactly how you mutate a program (I had been stuck on "what if the result doesn't even compile?" for a while). quote:I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over. Yeah, if I had already been spoiled by good Lisp debugging I would probably hate this, but as it is I consider it an unexpected treat when I can inspect the value of a variable at runtime, and being able to try out code quickly in a running context is just unheard of. Composing my program from many smaller chunks that I can test easily in the REPL alleviates a lot of the pain.
|
# ? Mar 6, 2015 21:04 |
|
Dessert Rose posted:Ooo, thanks! It's been really hard to find many up-to-date resources on this stuff. It took quite a long time to find something that explained the basic concepts underpinning exactly how you mutate a program (I had been stuck on "what if the result doesn't even compile?" for a while).
|
# ? Mar 6, 2015 21:13 |
|
Dessert Rose posted:[Lisp, Planet Wars and Genetic Programming] Besides what has been mentioned there are a lot of free online sources, e.g. A Field Guide to Genetic Programming. There was also a Planet Wars team that evolved their bot using GP: http://planetwars.aichallenge.org/profile.php?user_id=4038. It even finished a couple of places above me :-| (But they also got me enthusiastic for GP, cue embarrassing (and incorrect) blog post: http://www.aerique.net/blog/2011/01-18-baby-steps-into-genetic-programming.html) Dessert Rose posted:Sure is a shame about that debuggability though. At least I'm pretty much used to working in languages or on projects where I'm frequently reduced to slime trail debugging anyway. You might want to play around with Common Lisp and Slime to enjoy better debuggability. Since Quicklisp the library ecosystem isn't that bad.
|
# ? Mar 6, 2015 21:53 |
|
minidracula posted:I agree with both of these statements wrt Clojure, though perhaps in a more negative sense: I want to like it due to (some of) its conscious engineering choices that both temper and deeply inform the language design, but the debugging story is horrendous from what I expect out of a Lisp environment, and is a significant step backward that I find hard to get over. Yeah, that's what makes it hard for me to recommend Clojure, despite the fact that I really enjoy working on it. Today's syntax error. Root cause: forgot a symbol in :refer [...]. Total length of error message: 119 lines. Amount of that that's useful: 1 line.
|
# ? Mar 7, 2015 00:24 |
|
Dessert Rose posted:Hi thread, I just discovered how much I love Lisp after deciding to try Clojure, picking up Emacs and oh god I'm in the parentheses vortex now. Are you sticking with Clojure or have you jumped into Common Lisp at this point? If you're only just considering Common Lisp now, I can vouch for Clozure Common Lisp being a decent free environment, including the IDE if you use a Mac. For sticking with emacs/SLIME use you can go with either CCL or SBCL, and any of a number of other Lisp environments.
|
# ? Mar 7, 2015 00:57 |
|
ToxicFrog posted:Yeah, that's what makes it hard for me to recommend Clojure, despite the fact that I really enjoy working on it. You using CIDER? It doesn't have a debugger (they've made noises about incorporating the Ritz debugger, which was informed by Slime), but when you get errors it lets you automatically filter out the tooling and clojure-infrastructure stack frames that are just hiding the actual error 99% of the time.
|
# ? Mar 7, 2015 06:23 |
|
The Clojure world is slowly rebuilding SLIME.
|
# ? Mar 7, 2015 18:12 |
|
rrrrrrrrrrrt posted:The Clojure world is slowly rebuilding SLIME. Just as the SLIME world is slowly rebuilding the LispM.
|
# ? Mar 7, 2015 18:14 |
|
|
|
I get why it's all happening and really SLIME probably wasn't ever going to work out for Clojure, but at one point a part of me was really excited at the idea of Clojure being almost a drop-in CL replacement.
|
# ? Mar 7, 2015 18:18 |