|
Template Haskell is, as a colleague put it, "the simplest possible thing that could work". It's got a horrible notion of hygiene, a bad interface, and a frankly unfortunate usage pattern. The paper is also a jumbled mess of "but it's like C++ templates", instead of just admitting that they really want syntax-case but don't want to try to get that together and working.
|
# ¿ May 20, 2015 19:38 |
|
|
# ¿ Apr 29, 2024 17:32 |
|
While your analogy is nice, and the types are more or less on point, this explanation evades the real reason why main doesn't have any arguments: the STG machine needs a starting configuration, and abstracting over input makes for a worse abstract machine. Setting up an STG machine with a main that takes no arguments makes a lot of sense, because you just build everything into the heap and then you can formally define "invoke main with no arguments" as the start configuration for every machine, uniformly. Otherwise, any theoretical thing you want to do with main in the STG machine requires an existential quantification over the input of main, so that you can say 'there exists some input i such that main {i} runs to completion'. This formalism doesn't take care of additional arguments, either, and that means that i cannot simply be a value---it has to be some constructor. So now your machine went from "run main and watch it tick" to "pick some constructor that your main will immediately match on to extract I/O information like the initial arguments, and then apply it as main {C x1 x2 ... xn}". Unfortunately, even that doesn't alleviate the headache: your proofs are all going to look like "there exists C, x_1, ..., x_n such that main {C x1 x2 ... xn} [does whatever you're trying to prove]". As a result, and since Haskell was built up from the STG machine, piece by piece, main takes no input and produces no output. Pushing main into the IO monad doesn't break this formalism, because the IO abstractions are just additional functions that live in the global environment of the STG machine as "built-ins" that behave correctly by the time you get down to the STG machine, but rewriting main to explicitly take or return things ruins the underlying abstract model.
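Concretely, this is why GHC hands you the command line through IO actions rather than as a parameter on main. A minimal sketch using the standard System.Environment functions (the summary helper is mine):

```haskell
import System.Environment (getArgs, getProgName)

-- Pure summary of the invocation; main itself stays argument-free
-- and fetches everything from the runtime through IO actions.
summary :: String -> [String] -> String
summary name args = name ++ " got " ++ show (length args) ++ " argument(s)"

main :: IO ()
main = do
  name <- getProgName
  args <- getArgs
  putStrLn (summary name args)
```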
|
# ¿ Jun 4, 2015 02:54 |
|
Barnyard Protein posted:Scheme question, what is the 'right' way to associate a unique function with an instance of a record type? The way you've written this up, I'm not 100% certain exactly what you're trying to do. There isn't enough detail here for me to piece out what make-instruction actually does, or what the procedure it takes is expected to do. Also, why does the lambda take an opcode if it's tied to a particular instance of a particular record? Couldn't it just, you know, know its opcode? I would have expected code that looked like: code:
code:
This approach, though, is pretty OO-like. I would be lazier about this, and probably just write something like: code:
QuantumNinja fucked around with this message at 23:00 on Jun 18, 2015 |
# ¿ Jun 18, 2015 22:50 |
|
Yes, I'd do this super-different. Lisp code:
At this point, though, you've pretty much rolled your own ghetto free-for-all facsimile of a record inheritance structure, so use the one in your implementation instead if it's there. In R6RS I'm pretty sure it would look about like this: Lisp code:
Lisp code:
The reference is SICP, chapter 4 (it's free online). (Sorry if there are typos in the above code; I did it without a repl or my usual VIM setup because I'm feeling lazy. I hope it gets the ideas across.) EDIT: If there are fewer than 15 opcodes, I actually wouldn't bother writing add-opcode? and the like. I'd just write to-instr using the bit-operation queries on the left of the cond, leave a comment at each test indicating what it was doing, and dump the input in there before doing anything else with it. And if all of the constructors for the instruction records are as simple as the one in build-new-add-instr, I might not even pull those out if I was never, ever going to build a new instruction anywhere else in the program. If you're worried about recovering the opcodes, I'd write instr->opcode that reverses the process. EDIT2: The procedure you should write and use all over, though, is: Lisp code:
Lisp code:
QuantumNinja fucked around with this message at 01:45 on Jun 19, 2015 |
# ¿ Jun 19, 2015 01:36 |
|
By design, yes! Mostly, it lacks a good resource for looking up "how do I X" in the language. The wikibook is tolerable, but incomplete. Learn You a Haskell is aimed at people who don't know what a function is, and as far as I can tell Real World Haskell is aimed at C programmers trying to write bar-code scanners in Haskell. Edit: everyone I know who has learned Haskell did it by building something large. It turns into "How do I X" and reading Stack Overflow enough times that at the end you know the language.
|
# ¿ Jun 19, 2015 03:55 |
|
gonadic io posted:(.).(.) is real, and occasionally useful. Ah, yes, the Total Recall composition operation.
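For anyone who hasn't run into it: `(.).(.)` composes a one-argument function onto a two-argument one. A quick sketch (the name is mine, obviously):

```haskell
-- (.).(.) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
-- It feeds the result of a two-argument function into a
-- one-argument function.
totalRecall :: (c -> d) -> (a -> b -> c) -> a -> b -> d
totalRecall = (.) . (.)

-- totalRecall negate (+) 2 3 == negate (2 + 3) == -5
```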
|
# ¿ Jun 25, 2015 20:48 |
|
Question about Elm: how would I go about making an Elm application interact with a database backend? The 'easiest' way I can imagine from the API is to use JSON to post requests to some server-side script and get the results back. Is there an approach (maybe as a library or tutorial) that doesn't require doing something like that for user login authentication?
|
# ¿ Jun 30, 2015 23:43 |
|
MononcQc posted:This has been the major challenge of functional programming, IMO. Most of the algorithms you'll see out there that do any form of dynamic programming will use a mutable array somewhere in there, or will make assumptions of things like O(1) hashtables. In an immutable language, O(log n) with a high branching factor is as good as it gets without special runtime support to hide that poo poo, most of the time, or possibly amortized O(1), which still ends up having a high cost here and there. The upside, of course, is that the implementation will be 150 lines shorter!
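In Haskell, the usual dodge is to let laziness play the role of the mutable table; a minimal sketch of the standard trick:

```haskell
-- Dynamic programming without mutation: a lazily built list is the
-- memo table, so each entry is computed at most once and later
-- entries share earlier ones.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```

(Indexing with !! is still O(n), which is exactly the kind of cost the quote is talking about.)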
|
# ¿ Jul 21, 2015 03:08 |
|
Athas posted:As far as I know, it is still an open question whether pure functional programming is theoretically less performant than imperative programming, but it is easy to see that far less work has gone into purely functional algorithms. Pippenger tackled this subject in the mid-90s, comparing mutating lisp with non-mutating lisp. Here's the (quite short) paper: http://www.cs.princeton.edu/courses/archive/fall03/cs528/handouts/Pure%20Versus%20Impure%20LISP.pdf Pippenger posted:Of course, he's relying on specific list complexities. Applying Okasaki's techniques can crush the logarithmic factor into amortized O(1) in a lot of cases, too. Just like the sieve paper linked earlier, performant functional programming is almost always about good data structures. QuantumNinja fucked around with this message at 14:08 on Aug 3, 2015 |
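As a taste of the kind of structure Okasaki uses to get that amortized O(1): the classic two-list functional queue, sketched from memory:

```haskell
-- A purely functional FIFO queue: enqueue onto the back list,
-- dequeue from the front list, and reverse the back into the front
-- only when the front runs dry. Amortized O(1) per operation.
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front back) = Queue front (x : back)

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] [])   = Nothing
dequeue (Queue [] back) = dequeue (Queue (reverse back) [])
dequeue (Queue (x:f) b) = Just (x, Queue f b)
```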
# ¿ Aug 3, 2015 14:05 |
|
MononcQc posted:There's one last category which is "I want to represent circular references doubly-linked data structures the way I can with pointers", and the result for that are zippers. I'm a bit short on time for now but can expand on this if asked to. For zippers: http://learnyouahaskell.com/zippers http://strictlypositive.org/diff.pdf https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf xtal posted:What do you mean by that? I've enjoyed writing concurrent code in Clojure, Haskell and Scala much more than other languages. Immutable data structures and pure functions make for dead simple parallelism. The rebuilding process in MononcQc's last post is part of the rub. If each process has a pointer to D at the top of the tree, then how do they each learn that D has changed to the new D because a single process needed to add something? They need a shared pointer, right? And if each thread needs to put things into this tree, they all need to update not just that shared pointer, but maintain the new nodes that another thread may have generated during insertion. You end up needing a safety mechanism like an atomic lock on that reference; otherwise, process A may read node D, rebuild part of the tree, then yield to process B, which reads node D, rebuilds part of the tree, then yields, then process A updates D to be D_A and yields, then process B updates D with D_B. D_A's updates are entirely lost because the nodes built as part of D_A were only ever contained in process A and its update. An atomic lock on the structure will solve this, but that's a huge slowdown in the program. The other thing you can do is 'queue' updates into a single thread that has final say over everything, but even then requesting values from D means that the manager has to discharge all of the pending updates that might be relevant to looking up the value.
In general, the parallelism is dead simple if it's in the form of a map or a fold: if you can discretize the work and combine it "later". If the threads all need to share and update data as the computation proceeds, though, you're in a lot of trouble. In fact, this is the entire motivation of LVars, and they've been used to address these sorts of problems. QuantumNinja fucked around with this message at 19:27 on Aug 3, 2015 |
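Since zippers keep coming up: the list version of the idea fits in a few lines. A sketch (the names are mine):

```haskell
-- A list zipper: a focus element plus the reversed prefix before
-- it. Moving the focus and editing at the focus are both O(1).
data Zipper a = Zipper [a] a [a] deriving (Eq, Show)

fromList :: [a] -> Maybe (Zipper a)
fromList []       = Nothing
fromList (x : xs) = Just (Zipper [] x xs)

right :: Zipper a -> Maybe (Zipper a)
right (Zipper _ _ [])          = Nothing
right (Zipper ls x (r : rs))   = Just (Zipper (x : ls) r rs)

modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs

toList :: Zipper a -> [a]
toList (Zipper ls x rs) = reverse ls ++ (x : rs)
```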
# ¿ Aug 3, 2015 19:20 |
|
MononcQc posted:As a non-academic who does a bunch of functional programming, I'd say I'd found a few main usages for the book: Incidentally, the same thing could be said about CLRS, and I think that's the point: Okasaki's book is about being a reference guide that lets you catch onto the pattern yourself. Treat it as such, and it's nice. Even as an academic, I think about the book this way.
|
# ¿ Aug 4, 2015 17:26 |
|
If you absolutely want to do list lookup-based showing, I think you're better off calling lookup: code:
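Something in this spirit, with a hypothetical association list (fromMaybe supplies the fallback):

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical display table: pair each value with its string and
-- let Prelude's lookup do the matching instead of hand indexing.
names :: [(Int, String)]
names = [(0, "zero"), (1, "one"), (2, "two")]

showByLookup :: Int -> String
showByLookup n = fromMaybe (show n) (lookup n names)
```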
|
# ¿ Sep 17, 2015 16:04 |
|
fart simpson posted:Elm really does need a Hoogle though. I kind of want to write one, but I'm not sure I'm up to it at the moment. It's only okay if you do it in Elm, though.
|
# ¿ Jan 11, 2016 14:51 |
|
Maluco Marinero posted:The problem with explicitly passing things down the hierarchy is that at a point you're preventing code reuse, because as per the architecture Actions are what get passed down, you could never write a nice module for Drag and Drop without boilerplate from outside to convert whatever the mouse input signal is into the Drag and Drop's expected actions. Lenses could work.
|
# ¿ Jan 12, 2016 04:47 |
|
Pollyanna posted:You can tell that I'm not terribly satisfied with the results. Sure, it theoretically works, but I'm not sure how to test and debug it. lein repl doesn't seem to have any step-debugging or breakpoints, so I have to rely on println, which sucks butt. And in terms of application architecture, this ended up being top-down as opposed to bottom-up, because that's just how I'm used to doing things. It's a lot more straightforward to go from a full representation of the game as opposed to individually defining cards, then decks, then players, then the game. I'm also realizing that most of these functions should be marked private, outside of tick-game. One thing I've found invaluable for Scheme is trace-define, and it looks like Clojure has its own version. Look at that for debugging. The architecture is close, IMO. Here are some suggestions. I'd expect resolve-match to look more like this, though (keeping in mind I know Scheme but not Clojure so the syntax probably isn't right): code:
code:
code:
|
# ¿ Mar 12, 2016 02:08 |
|
Puella Magissima posted:I think you're right that the problems originate from the data structure. The reason I have it that way is that this is from an assignment that was originally meant to be in java until the professor changed his mind and said we could use whatever, so I just directly took the java data structure and used that. It's obvious to me now that it makes a lot more sense to have a meaningful list index when you're working with loops than with functions, so lesson learned on that front. Thanks for the feedback! The data structure isn't even the problem. You can do this just fine without explicit indices and still get what you want because your edges are the indices. Assuming an input that looks like: code:
code:
If you need node names for some reason (like strings or something), just zip them with numbers, build a list of number-number edges, run this, then translate them all back. gonadic io is right, though, that this should really use Data.Map (or at least a vector) because this sort of implementation is comparatively inefficient.
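A sketch of what the Data.Map version might look like (the node type and the function names are made up):

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Adjacency map: each node maps to its neighbor list, so a node's
-- name *is* the index and no positional list walking is needed.
type Graph = Map Int [Int]

fromEdges :: [(Int, Int)] -> Graph
fromEdges = foldr (\(a, b) g -> Map.insertWith (++) a [b] g) Map.empty

neighbors :: Int -> Graph -> [Int]
neighbors n = Map.findWithDefault [] n
```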
|
# ¿ Mar 13, 2016 19:25 |
|
Puella Magissima posted:Whoa, that's way better than what I had. I don't think it even occurred to me to recurse to the correct list instead of manually indexing, but when you put it like that it makes total sense. I guess I'm still not used to thinking functionally. I probably should have come here before spending an hour banging my head against the wall coming up with my lovely solution. Thanks a lot for your help. I'll look into Data.Map as well. gonadic io is 100% spot-on about your learning process! Maybe try some of SICP in it, or Project Euler. Another thing that can help with Haskell is HLint, which will give you suggestions for your code. Vim's Syntastic supports it and certainly made me a better Haskell programmer. Also, pedantry: one recurs to the correct list, just as one incurs debt. Recursing is an unfortunate thing that happens after you've un-cursed an object and then sat, unlucky, on a throne in Nethack.
|
# ¿ Mar 14, 2016 01:27 |
|
You could always install a linux virtual machine and develop haskell on that.
|
# ¿ Mar 15, 2016 14:33 |
|
Bognar posted:Neither do I, but I use a linux VM to write Rust so I wasn't joking; that's how I do most of my development.
|
# ¿ Mar 17, 2016 01:18 |
|
Maluco Marinero posted:Elm does a pretty good job of dumbing it down for front end developers, so maybe the code examples and guides there provide some good practical examples of frp Seconded. Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.
|
# ¿ Apr 21, 2016 03:29 |
|
|
Athas posted:Every optimising compiler belongs in the horror thread. It's always a mess of heuristics, hacks and "good-enough"s. I disagree. Well, sort of. Which is to say, Kent Dybvig did it with one heuristic that properly quantified "good enough". The rest of Chez Scheme (which was recently open-sourced) is actually a pretty direct process of efficient syntactic rewriting. For example, here is the pass handling letrec. Outside of Chez Scheme (and other, similar compilers written with Dybvig's philosophy, like Ikarus), though, yeah, most compilers, regardless of optimizations, are a horrorshow. Rust's compiler doesn't even do massive optimizations and it's a horrorshow; at this point GCC's a black box; LLVM can't even decide which cross-compilation is available; and ICC is full of magic tricks. QuantumNinja fucked around with this message at 04:47 on Jun 4, 2016 |
# ¿ Jun 4, 2016 04:43 |