QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
Template Haskell is, as a colleague put it, "the simplest possible thing that could work". It's got a horrible notion of hygiene, a bad interface, and a frankly unfortunate usage pattern. The paper is also a jumbled mess of "but it's like C++ templates", instead of just admitting that they really want syntax-case but don't want to try to get that together and working.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

While your analogy is nice, and the types are more or less on point, this explanation evades the real reason why main doesn't have any arguments: the STG machine needs a starting configuration, and abstracting over input makes for a worse abstract machine. Setting up the STG machine with a main that takes no arguments makes a lot of sense, because you just build everything into the heap and then you can formally define "invoke main with no arguments" as the start configuration of every machine, uniformly. Otherwise, any theoretical thing you want to do with main in the STG machine requires an existential quantification over the input of main, so that you can say 'there exists some input i such that main {i} runs to completion'. That formalism doesn't take care of additional arguments, either, which means i can't simply be a value---it has to be some constructor. So now your machine goes from "run main and watch it tick" to "pick some constructor that your main will immediately match on to extract I/O information like the initial arguments, and then apply it as main {C x1 x2 ... xn}". Unfortunately, even that doesn't alleviate the headache: your proofs are all going to look like "There exists C, x_1, ..., x_n, such that main {C x1 x2 ... xn} [does whatever you're trying to prove]".

As a result, and since Haskell was built up from the STG machine piece by piece, main takes no input and produces no output. Pushing main into the IO monad doesn't break this formalism, because the IO abstractions are just additional functions that live in the global environment of the STG machine as "built-ins" that behave correctly by the time you get down to the STG machine, but rewriting main to explicitly take or return things ruins the underlying abstract model.
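You can see the surface-level consequence of this in the language itself: program arguments come from an I/O action you call inside main, not from parameters of main. A minimal sketch (nothing STG-specific here, just the shape):
code:
import System.Environment (getArgs)

-- main takes no arguments; the runtime's start configuration just
-- invokes it, and the arguments are fetched by an I/O "built-in"
-- living in the global environment instead
main :: IO ()
main = do
  args <- getArgs
  putStrLn ("got " ++ show (length args) ++ " argument(s)")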

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Barnyard Protein posted:

Scheme question, what is the 'right' way to associate a unique function with an instance of a record type?

I'm decoding opcodes. The process of decoding which instruction the opcode refers to has a regular structure, for which I've defined a record type. My question refers to how to then parse the fields. My current solution is to add a lambda expression to the instance that is responsible for determining if an opcode can be decoded by that instruction and parsing out the fields.

...

but i still think its kinda stupid, i'm getting that "there is probably a better way to do this that i don't know about" feeling

The way you've written this up, I'm not 100% certain what exactly you're trying to do. There isn't enough detail here for me to piece out what make-instruction actually does, and what the procedure it takes is expected to do. Also, why does the lambda take an opcode if it's tied to a particular instance of a particular record? Couldn't it just, you know, know its opcode?

I would have expected code that looked like:
code:
((instruction-decode-op record) record)
That is, package up the decoder with the record instance when you create it. Then the constructor is going to be something like
code:
(make-instruction op ... (find-instruction-decode-op op))
Then, obviously, build a wrapper around that so you never have to write it again.

This approach is, though, pretty OO-like. I would be lazier about this, and probably just write something like:
code:
(decode-instruction record)

(define decode-instruction
  (lambda (instr)
    (let ((op (instr-opcode instr)))
      (cond
        [(eq? op 'add) ...]
        [(eq? op 'fn) ...]))))
That's how I've always done it when writing compilers and interpreters, abstracting into smaller helpers when possible (like [(memq op binops) (decode-binop-instruction instr)] as a line of the cond). I'd seriously try writing it as a Friedman-style interpreter with records first, and then see what abstraction you really want.
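Spelled out, that helper-extracted version might look like this (binops, jumps, and the decode-*-instruction procedures are hypothetical names for whatever your instruction set needs):
code:
(define decode-instruction
  (lambda (instr)
    (let ([op (instr-opcode instr)])
      (cond
        [(memq op binops) (decode-binop-instruction instr)]
        [(memq op jumps)  (decode-jump-instruction instr)]
        [else (error 'decode-instruction "unknown opcode" op)]))))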

QuantumNinja fucked around with this message at 23:00 on Jun 18, 2015

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
Yes, I'd do this super-different.

Lisp code:
(define make-add-instruction
  (lambda (opcode)
    (make-instruction 'add
                      'add-x-to-y
                      #xB40000
                      #xFF8000
                      (let ([b-value (bitwise-arithmetic-shift (bitwise-and #x004000 opcode) -14)]
                            [d-value (bitwise-arithmetic-shift (bitwise-and #x002000 opcode) -13)]
                            [f-value (bitwise-and #x001FFF opcode)])
                        `((b . ,b-value) (d . ,d-value) (f . ,f-value))))))

(define add-instr? (lambda (instr) (eq? (instr-type instr) 'add)))

(define add-opcode? (lambda (opcode) ... )) ;; whatever masking op you need here to say yes or no

(define to-instr
  (lambda (opcode)
    (cond
      [(add-opcode? opcode) (make-add-instruction opcode)]
      ...)))

(let* ([some-opcode #xB44123]
       [some-instr (to-instr some-opcode)])
  (display (add-opcode? some-opcode))                  ;; => #t
  (display (add-instr? some-instr))                    ;; => #t
  (display (instruction-opcode-fields some-instr)))   ;; => ((b . 1) (d . 0) (f . 291))
The upshot of this is that you can just map to-instr over your list of opcodes and convert them into a list of records that you can then work over. Predicates like add-instr? will help you write a single cond to dispatch on and handle every sort of opcode, and you can use anything written that way in a map, too!
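For example (opcode-list is a hypothetical name for your raw input):
code:
(define instrs (map to-instr opcode-list))   ;; all the opcodes, decoded
(define adds   (filter add-instr? instrs))   ;; just the add instructions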

At this point, though, you've pretty much rolled your own ghetto free-for-all facsimile of a record inheritance structure, so use the one in your implementation instead if it's there. In R6RS I'm pretty sure it would look about like this:
Lisp code:
(define-record-type add-instruction
  (parent instruction)
  (fields b d f))
Then you'll write this:
Lisp code:
(define build-new-add-instr
  (lambda (opcode)
    (let ([b-value (bitwise-arithmetic-shift (bitwise-and #x004000 opcode) -14)]
          [d-value (bitwise-arithmetic-shift (bitwise-and #x002000 opcode) -13)]
          [f-value (bitwise-and #x001FFF opcode)])
      (make-add-instruction 'add 'add-x-to-y #xB40000 #xFF8000 b-value d-value f-value))))
The upshot is that you then get add-instruction? for free instead of having to write it, and so dealing with opcodes can just be one large cond that uses add-instruction?, jump-instruction?, ...

The reference is SICP, chapter 4 (it's free online).

(Sorry if there are typos in the above code; I did it without a REPL or my usual Vim setup because I'm feeling lazy. I hope it gets the ideas across.)

EDIT: If there are fewer than 15 opcodes, I actually wouldn't bother writing add-opcode? and the like. I'd just write to-instr using the bit-operation queries on the left of the cond, leave a comment at each test indicating what it was doing, and dump the input in there before doing anything else with it. And if all of the constructors for the instruction records are as simple as the one in build-new-add-instr, I might not even pull those out if I was never, ever going to build a new instruction anywhere else in the program. If you're worried about recovering the opcodes, I'd write an instr->opcode that reverses the process (sketched below).
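Here's what that reversal might look like for the add layout, using the alist-of-fields version from the first listing (the masks are the same ones from make-add-instruction; bitwise-ior is R6RS):
Lisp code:
;; hypothetical inverse of make-add-instruction's extraction: put b back
;; at bit 14, d at bit 13, f in the low 13 bits, then restore the base
;; pattern for add
(define add-instr->opcode
  (lambda (instr)
    (let ([fields (instruction-opcode-fields instr)])
      (bitwise-ior #xB40000
                   (bitwise-arithmetic-shift (cdr (assq 'b fields)) 14)
                   (bitwise-arithmetic-shift (cdr (assq 'd fields)) 13)
                   (cdr (assq 'f fields))))))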

EDIT2: The procedure you should write and use all over, though, is:

Lisp code:
(define mask-extraction
  (lambda (input field shift)
    (bitwise-arithmetic-shift (bitwise-and field input) shift)))
Then your RHSs are all like:
Lisp code:
(let ([b-value (mask-extraction opcode #x004000 -14)]
      [d-value (mask-extraction opcode #x002000 -13)]
      [f-value (bitwise-and #x001FFF opcode)])
  (make-add-instruction 'add 'add-x-to-y #xB40000 #xFF8000 b-value d-value f-value))

QuantumNinja fucked around with this message at 01:45 on Jun 19, 2015

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
By design, yes!

Mostly, it lacks a good resource for looking up "how do I X" in the language. The wiki book is tolerable, but incomplete. Learn You a Haskell is aimed at people who don't know what a function is, and as far as I can tell Real World Haskell is aimed at C programmers trying to write bar-code scanners in Haskell.

Edit: everyone I know who has learned Haskell has done it by building something large. It turns into asking "how do I X" and reading Stack Overflow enough times that, at the end, you know the language.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

gonadic io posted:

(.).(.) is real, and occasionally useful.

Ah, yes, the Total Recall composition operation.
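For the record, it types out as composing a one-argument function with a two-argument one; a quick sketch:
code:
-- ((.) . (.)) f g x y == f (g x y)
compose2 :: (c -> d) -> (a -> b -> c) -> (a -> b -> d)
compose2 = (.) . (.)

-- e.g. compose2 not elem 3 [1, 2]  =>  True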

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
Question about Elm: how would I go about making an Elm application interact with a database backend? The 'easiest' way I can imagine from the API is to use JSON to post requests to some server-side script and get the results back. Is there an approach (maybe as a library or tutorial) that doesn't require doing something like that for user login authentication?

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

MononcQc posted:

This has been the major challenge of functional programming, IMO. Most of the algorithms you'll see out there that do any form of dynamic programming will use a mutable array somewhere in there, or will make assumptions of things like O(1) hashtables. In an immutable language, O(log n) with a high branching factor is as good as it gets without special runtime support to hide that poo poo, most of the time, or possibly amortized O(1), which still ends up having a high cost here and there.

Almost every time you want to use an algorithm you know, you end up having to find a clever way to transform it to work a bit differently (or to invent your own variation of it) so that it meshes well with the stuff supported by your language and its constraints.

The upside, of course, is that the implementation will be 150 lines shorter! :boom:

:saddowns:

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Athas posted:

As far as I know, it is still an open question whether pure functional programming is theoretically less performant than imperative programming, but it is easy to see that far less work has gone into purely functional algorithms.

Pippenger tackled this subject in the mid-90s, comparing mutating lisp with non-mutating lisp. Here's the (quite short) paper: http://www.cs.princeton.edu/courses/archive/fall03/cs528/handouts/Pure%20Versus%20Impure%20LISP.pdf

Of course, he's relying on specific list complexities. Applying Okasaki's techniques can crush the logarithmic factor into amortized O(1) in a lot of cases, too. As with the sieve paper linked earlier, performant functional programming is almost always about good data structures.

QuantumNinja fucked around with this message at 14:08 on Aug 3, 2015

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

MononcQc posted:

There's one last category, which is "I want to represent circular references or doubly-linked data structures the way I can with pointers", and the answer for that is zippers. I'm a bit short on time for now but can expand on this if asked to.

For zippers:
http://learnyouahaskell.com/zippers
http://strictlypositive.org/diff.pdf
https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf
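The core idea fits in a few lines; here's a minimal list zipper (Huet's paper does the same thing for trees, just with a richer context):
code:
-- a list zipper: the elements left of the focus (stored reversed), the
-- focus itself, and the elements to its right; moving and editing at
-- the focus are all O(1)
data Zipper a = Zipper [a] a [a]

left, right :: Zipper a -> Zipper a
left  (Zipper (l:ls) x rs) = Zipper ls l (x:rs)
left  z                    = z
right (Zipper ls x (r:rs)) = Zipper (x:ls) r rs
right z                    = z

modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs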

xtal posted:

What do you mean by that? I've enjoyed writing concurrent code in Clojure, Haskell and Scala much more than other languages. Immutable data structures and pure functions make for dead simple parallelism.

In any case, I'm starting to see that the right way to do concurrency is multiple processes communicating through a message broker.

The rebuilding process in MononcQc's last post is part of the rub. If each process has a pointer to D at the top of the tree, then how do they each learn that D has changed to the new D because a single process needed to add something? They need a shared pointer, right? And if each thread needs to put things into this tree, they all need to update not just that shared pointer, but also maintain the new nodes that another thread may have generated during insertion. You end up needing a safety mechanism like an atomic lock on that reference; otherwise, process A may read node D, rebuild part of the tree, then yield to process B, which reads node D, rebuilds part of the tree, then yields; then process A updates D to be D_A and yields, then process B updates D with D_B. D_A's updates are entirely lost, because the nodes built as part of D_A were only ever contained in process A and its update. An atomic lock on the structure will solve this, but that's a huge slowdown in the program. The other thing you can do is 'queue' updates into a single thread that has final say over everything, but even then, requesting values from D means that the manager has to discharge all of the pending updates that might be relevant to looking up the value.
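In Haskell terms, that shared pointer ends up as something like an IORef over the immutable structure, and the safety mechanism is an atomic modify. A sketch (the names and the Map stand-in are mine, not anyone's real tree):
code:
import Data.IORef
import qualified Data.Map as Map

-- the shared pointer to the root, as an IORef holding an immutable map;
-- atomicModifyIORef' makes the read-rebuild-swing-the-pointer step
-- atomic, so process B can't clobber process A's rebuilt nodes
insertShared :: IORef (Map.Map Int String) -> Int -> String -> IO ()
insertShared root k v =
  atomicModifyIORef' root (\m -> (Map.insert k v m, ()))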

In general, the parallelism is dead simple if it's in the form of a map or a fold: if you can discretize the work and combine it "later". If the threads all need to share and update data, though, as the computation proceeds, you're in a lot of trouble. In fact, this is the entire motivation of LVars, and they've been used to address these sorts of problems.

QuantumNinja fucked around with this message at 19:27 on Aug 3, 2015

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

MononcQc posted:

As a non-academic who does a bunch of functional programming, I'd say I'd found a few main usages for the book:

- a dictionary of data structures I can steal and reimplement
- a way to see how I could rig my own data structures and think about their performance -- the book has tricks in things like how or when to rebalance
- a bunch of tricks that can be readapted from one data structure to the next, and gotchas about use cases (it is possibly faster to do a negative lookup ['thing doesn't exist'] in a trie than in a hash)

The big analysis bit is useful to wrap your head around, but being able to replicate it has not proven essential to me once you get a kind of intuitive feel for things you do.

Incidentally, the same thing could be said about CLRS, and I think that's the point: Okasaki's book is about being a reference guide that lets you catch onto the pattern yourself. Treat it as such, and it's nice. Even as an academic, I think about the book this way.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
If you absolutely want to do list lookup-based showing, I think you're better off calling lookup:

code:
module Days where

import Data.Maybe

data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun
  deriving (Eq)

dayStrings :: [(Day, String)]
dayStrings = [ (Mon, "Monday")
             , (Tue, "Tuesday")
             , (Wed, "Wednesday")
             , (Thu, "Thursday")
             , (Fri, "Friday")
             , (Sat, "Saturday")
             , (Sun, "Sunday")
             ]

instance Show Day where
  show d = fromMaybe "" $ lookup d dayStrings
At the point where you've written this code, however, it's pretty drat clear that you should have just directly written the seven LHS/RHS pairs of the Show instance.
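That is, something like this (same strings, no lookup):
code:
instance Show Day where
  show Mon = "Monday"
  show Tue = "Tuesday"
  show Wed = "Wednesday"
  show Thu = "Thursday"
  show Fri = "Friday"
  show Sat = "Saturday"
  show Sun = "Sunday"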

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

fart simpson posted:

Elm really does need a Hoogle though. I kind of want to write one, but I'm not sure I'm up to it at the moment.

It's only okay if you do it in Elm, though.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Maluco Marinero posted:

The problem with explicitly passing things down the hierarchy is that at some point you're preventing code reuse, because, as per the architecture, Actions are what get passed down; you could never write a nice module for Drag and Drop without boilerplate from outside to convert whatever the mouse input signal is to the Drag and Drop's expected actions.

Lenses could work.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Pollyanna posted:

You can tell that I'm not terribly satisfied with the results. Sure, it theoretically works, but I'm not sure how to test and debug it. lein repl doesn't seem to have any step-debugging or breakpoints, so I have to rely on println, which sucks butt. And in terms of application architecture, this ended up being top-down as opposed to bottom-up, because that's just how I'm used to doing things. It's a lot more straightforward to go from a full representation of the game as opposed to individually defining cards, then decks, then players, then the game. I'm also realizing that most of these functions should be marked private, outside of tick-game.

Plus, some of them are just plain awful. award-winnings and resolve-match are particularly ugly, and I don't feel like I understand Clojure (or Lisp in general) well enough to write these functions right the first time.

What am I missing here? Did I take a sensible approach to developing this program? What should my workflow be? What do I need to learn to be good at this language, and make things that are better than just endlessly nested functions on a data structure or two? I see people putting together loving, like, graphs and nodes and channels and systems everywhere, and I have no idea how I'd do that in Ruby, let alone Clojure. :psyduck:

One thing I've found invaluable for Scheme is trace-define, and it looks like Clojure has its own version. Look at that for debugging.

The architecture is close, IMO. Here are some suggestions.

I'd expect resolve-match to look more like this, though (keeping in mind I know Scheme but not Clojure so the syntax probably isn't right):

code:
(defn resolve-match [game]
  ;; next-set hands back the cards and removes them from the players'
  ;; decks, shadowing the previous binding of game
  (let [[cards game] (next-set game)]
    (cond
      (evenly-matched? cards) (add-p1-spoils (list (first cards))
                                             (add-p2-spoils (list (second cards)) game))
      (p1-wins? cards)        (add-p1-spoils cards game)
      :else                   (add-p2-spoils cards game))))
Then add-x-spoils does the obvious thing. Bonus points if you write an add-spoils that takes either :p1 or :p2 as an argument, something like (guessing at the shape of your game map):
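code:
(defn add-spoils [player cards game]
  ;; spoils are assumed to live under [:p1 :spoils] / [:p2 :spoils];
  ;; adjust the path to your actual representation
  (update-in game [player :spoils] concat cards))

(def add-p1-spoils (partial add-spoils :p1))
(def add-p2-spoils (partial add-spoils :p2))
Then the main loop is something like: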

code:
(defn play-war [{:keys [p1 p2] :as game}]
  (if (or (no-cards? p1) (no-cards? p2))
    ;; someone is out of cards: fold the spoils back in and check again
    (let [{:keys [p1 p2] :as game} (claim-spoils game)]
      (if (or (no-cards? p1) (no-cards? p2))
        (set-game-state :game-over game)
        (play-war game)))
    (play-war (resolve-match game))))
You can get rid of the need for both recursive calls to play-war by adding a game state option that represents that you've already reshuffled and one of the players is still cardless, like:

code:
(defn play-war [{:keys [p1 p2 game-state] :as game}]
  (let [shuffle? (or (no-cards? p1) (no-cards? p2))]
    (if (and shuffle? (= game-state :init-game))
      ;; we just started over, and one player still has no cards
      (set-game-state :game-over game)
      ;; claim-spoils probably also resets the game state to :init-game
      (play-war (if shuffle? (claim-spoils game) game)))))
That's what I'd do, anyway, as a Scheme programmer. Maybe wait for someone who does Clojure here more to weigh in.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Puella Magissima posted:

I think you're right that the problems originate from the data structure. The reason I have it that way is that this is from an assignment that was originally meant to be in Java until the professor changed his mind and said we could use whatever, so I just directly took the Java data structure and used that. It's obvious to me now that it makes a lot more sense to have a meaningful list index when you're working with loops than with functions, so lesson learned on that front. Thanks for the feedback!

The data structure isn't even the problem. You can do this just fine without explicit indices and still get what you want, because your edges are the indices. Assuming an input that looks like:

code:
5
1 2
2 4
3 4
this code yields the same output as the original code you posted:

code:
import Data.List (nub, sort)

getGraphFromFile :: String -> [[Int]]
getGraphFromFile text = result
  where
    ([vertcount]:edges) = (map (map read . words) $ lines text) :: [[Int]]
    empties             = replicate vertcount [] 
    addEdge value       = updateWith ((:) value)
    result              = map (nub . sort) $ foldr (\ [v1, v2] -> addEdge v2 v1 . addEdge v1 v2)
                                                   empties edges
    updateWith :: (a -> a) -> Int -> [a] -> [a]
    updateWith f 0 (y:ys) = f y : ys
    updateWith f n []     = []
    updateWith f n (y:ys) = y : updateWith f (n - 1) ys 
You don't need the indices at any point, and updateWith does the bulk of the work.

If you need node names for some reason (like strings or something), just zip them with numbers, build a list of number-number edges, run this, then translate them all back.
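The translation step is small, too; a sketch (indexEdges is a hypothetical helper, assuming the edges arrive as name pairs):
code:
import Data.List (nub)
import Data.Maybe (fromJust)

-- assign each node name an index, rewrite the edges as number pairs,
-- and keep the table around for translating back afterwards
indexEdges :: [(String, String)] -> ([(String, Int)], [[Int]])
indexEdges es = (table, [[ix a, ix b] | (a, b) <- es])
  where
    table = zip (nub (concatMap (\(a, b) -> [a, b]) es)) [0 ..]
    ix n  = fromJust (lookup n table)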

gonadic io is right, though, that this should really use Data.Map (or at least a vector) because this sort of implementation is comparatively inefficient.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Puella Magissima posted:

Whoa, that's way better than what I had. I don't think it even occurred to me to recurse to the correct list instead of manually indexing, but when you put it like that it makes total sense. I guess I'm still not used to thinking functionally. I probably should have come here before spending an hour banging my head against the wall coming up with my lovely solution. Thanks a lot for your help. I'll look into Data.Map as well.

gonadic io is 100% spot-on about your learning process! Maybe try some of SICP in it, or Project Euler.

Another thing that can help with Haskell is HLint, which will give you suggestions for your code. Vim's Syntastic supports it and certainly made me a better Haskell programmer.

Also, pedantry: one recurs to the correct list, just as one incurs debt. Recursing is an unfortunate thing that happens after you've un-cursed an object and then sat, unlucky, on a throne in Nethack :v:

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
You could always install a linux virtual machine and develop haskell on that. :v:

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Bognar posted:

Neither do I, but I use a linux VM to write Rust so :shrug:

I wasn't joking; that's how I do most of my development.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Maluco Marinero posted:

Elm does a pretty good job of dumbing it down for front-end developers, so maybe the code examples and guides there provide some good practical examples of FRP.

Seconded. Elm is the delicious cake of front-end development in a strongly-typed language. It's clean, easy, and powerful.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Athas posted:

Every optimising compiler belongs in the horror thread. It's always a mess of heuristics, hacks and "good-enough"s.

I disagree. Well, sort of. Which is to say, Kent Dybvig did it with one heuristic that properly quantified "good enough". The rest of Chez Scheme (which was recently open-sourced) is actually a pretty direct pipeline of efficient syntactic rewriting. For example, here is the pass handling letrec.

Outside of Chez Scheme (and other, similar compilers written with Dybvig's philosophy, like Ikarus), however, yeah: most compilers, regardless of optimizations, are a horrorshow. Rust's compiler doesn't even do massive optimizations and it's still a horrorshow; at this point GCC is a black box; LLVM can't even decide which cross-compilation targets are available; and ICC is full of magic tricks.

QuantumNinja fucked around with this message at 04:47 on Jun 4, 2016
