Karate Bastard posted:They do? Yeah. It's super powerful too, and is used for a lot more than unpacking values (see Jabor's post). IMO you should think of it as more of a beefed-up switch statement than an unpacking mechanism. I actually think most Haskell code gets compiled down to pattern matches internally, but I am far from an expert on Haskell compilation so don't quote me on that.
|
# ? Feb 7, 2015 10:08 |
|
VikingofRock posted:I really liked Real World Haskell, Neat, it has a "We make the content freely available online. If you like it, please buy a copy" policy. That's promising. I've only ever seen awesome people do that. Jabor posted:Awesome and constructive effortpost Now that's what I'm talking about, that's how you do it! That's time spent teaching nontrivial stuff instead of taking an awesome and powerful functionality (as I'm now learning it is) and dressing it up as gimped-down, error-prone syntax sugar. Thanks man!
|
# ? Feb 7, 2015 10:39 |
|
Suspicious Dish posted:How so? I had it the wrong way around and was thinking about how you can disable overcommit, so a process that's using over half the available memory and tries to fork immediately commits all that memory again and welp there's not enough of it.
|
# ? Feb 7, 2015 11:39 |
|
Karate Bastard posted:Neat, it has a "We make the content freely available online. If you like it, please buy a copy" policy. That's promising. I've only ever seen awesome people do that. Bryan O'Sullivan is a pretty awesome person.
|
# ? Feb 7, 2015 11:49 |
|
If you want to learn an actual lisp I've been really enjoying Clojure for the Brave and True. Can't go wrong with a tutorial that has you sorting suspected vampires by their glitter factor. Clojure is really useful because it runs on the JVM (and some other places too), so you get access to the vast Java ecosystem without having to, you know, Java.
|
# ? Feb 7, 2015 12:01 |
|
I've heard nice things about Clojure, but also that it has terrible startup time, unrelated to JVM startup times. True/false?
|
# ? Feb 7, 2015 13:41 |
|
Karate Bastard posted:I've heard nice things about Clojure, but also that it has terrible startup time, unrelated to JVM startup times. True/false? Definitely true. It's less of an issue for the kind of long-running services you'd typically write in Clojure, but it makes the language unsuitable for quick command line utilities.
|
# ? Feb 7, 2015 14:54 |
|
Vanadium posted:I had it the wrong way around and was thinking about how you can disable overcommit, so a process that's using over half the available memory and tries to fork immediately commits all that memory again and welp there's not enough of it. No, I think you were right. If you disable COW, your overcommit is going to be in pain, as you're going to be *actually writing* to *all* the overcommitted RAM.
|
# ? Feb 7, 2015 15:32 |
|
Karate Bastard posted:I've heard nice things about Clojure, but also that it has terrible startup time, unrelated to JVM startup times. True/false? Very true. It's great for long-running processes (daemons, GUI tools) but rubbish for anything where startup time is a factor. Clojure is the only lisp I've actually gotten poo poo done with, and the tooling is fantastic, but I wish it didn't take so long to start up (and use so much memory, although that's as much a JVM thing as a Clojure thing) so I could use it for more things. ToxicFrog fucked around with this message at 17:16 on Feb 7, 2015 |
# ? Feb 7, 2015 17:07 |
|
ToxicFrog posted:Very true. It's great for long-running processes (daemons, GUI tools) but rubbish for anything where startup time is a factor. I've been trying to mess around with Lisp as well, and from what I've seen SBCL is the way to go for CLIs or other "short run" things. Is that the case still, or is there something better? Most of the material I can find is from like 2012 or before.
|
# ? Feb 7, 2015 20:17 |
|
I would go with Haskell over Lisp if you just want a perspective on functional programming for pedagogical reasons. Haskell is pure, unlike Lisp, which forces you to be explicit about side effects and mutability. Haskell's type system will also demonstrate how powerful a good static type system can be at removing certain classes of common programming errors and boilerplate. Since many popular languages have adopted features like tail call optimization, higher-order functions, lambdas, and closures, I feel like Lisp is not left with much insight to deliver on functional programming.

The main thing Lisp has to demonstrate for pedagogy is metaprogramming. Lisp code is essentially an abstract syntax tree. You can pass Lisp code into a Lisp function that parses and arbitrarily manipulates this syntax tree to produce another syntax tree. This is conceptually how compilers do code transformations, as far as I'm aware. The amazing thing with Lisp is how easy this is, and that it's all done using Lisp code. You don't escape into a special metaprogramming world with different syntax. You aren't burdened with the insane unreadability of most metaprogramming facilities. You can essentially trivially define your own syntax for Lisp.

I really like Learn You a Haskell for Great Good. comedyblissoption fucked around with this message at 21:10 on Feb 7, 2015 |
# ? Feb 7, 2015 21:06 |
|
I wouldn't say that one-second startup times make anything "unusable" for command line utilities, really. Those benchmarks are much more damning for Android apps, where a 5-second startup time would make me want to launch my device out a window, but a couple of extra seconds to parse an XML file or whatever isn't so bad. It probably took me longer to type the command in the first place. ed: He was able to get Android start times down to 1.7s by disabling some nonessential things, but that's still really bad as a mandatory cost to start up the runtime, especially in a mobile environment where you're extra sensitive to delays. Dessert Rose fucked around with this message at 21:17 on Feb 7, 2015 |
# ? Feb 7, 2015 21:13 |
|
Dessert Rose posted:I wouldn't say that one-second startup times make anything "unusable" for command line utilities, really. Those benchmarks are much more damning for Android apps, where a 5-second startup time would make me want to launch my device out a window, but a couple of extra seconds to parse an XML file or whatever isn't so bad. It probably took me longer to type the command in the first place. Depends. Someone at work went crazy over a one-second startup time because it was a script that was run in response to a UI button, and the UI then acted on the output to open a new window. Yes, it shouldn't have been a script like that in any sane universe, but when startup time became 2ms the perception problem went away.
|
# ? Feb 7, 2015 21:51 |
|
LeftistMuslimObama posted:Yeah, I just basically didn't see why fork() is useful by itself,

1. As a means of task parallelism, particularly I/O parallelism where a process spends lots of time in blocking syscalls. Its use here is mostly archaic now that newer/better mechanisms like threads and asynchronous I/O exist, but it was still a common approach through the 90s.

2. As a means of providing process supervision and/or sandboxing. Daemons sometimes run supervisor processes to catch crashes of child processes, or to facilitate privileged operations on behalf of sandboxed children. This use is starting to go (partially) away as more powerful init systems become common (launchd, upstart, systemd, etc.). However, it's still useful in sandboxing, where it's faster to fork off a seed process than to create runtime environments from scratch (e.g., Android's zygote).

3. Combined with exec, to provide a simple two-step kernel abstraction that facilitates a crapload of ways to "spawn" other programs.

LeftistMuslimObama posted:but why fork()+exec() instead of just something like spawnProcess(<function pointer>,<parameters>)?

Now, there are a lot of details involved in spawning a new program. Some of these, like where you locate the code, stack, and parameters, are defined as part of an ABI that the kernel and userspace agree on. But others, like what files should be open, where fds should point, what user and process group the program should run under, whether any resource limits should be set, and which signals should be ignored, are conventions of userspace, and they even change depending on what's spawning what. fork+exec lets all these things be configured as part of spawning a child program without having to create a massive CreateProcess interface to specify them all. Another alternative is to spawn children in a paused state, to allow the parent process to configure the children before launching them. My guess is that that approach may have been considered, but the combination of fork+exec could handle that case without having to introduce a set of "change this behavior about this other process" syscalls. ExcessBLarg! fucked around with this message at 04:55 on Feb 8, 2015 |
# ? Feb 8, 2015 04:51 |
|
CreateProcess takes 10 arguments. Two of them are structs with three fields each. One is a bitmask with 16 possible values. One is a struct with 19 fields. fork()+exec() is a zero-argument function plus a function taking one or more arguments, and together they give you equivalent functionality.
|
# ? Feb 8, 2015 05:43 |
|
Except for the giant list of pitfalls where the tradeoff isn't as great.
|
# ? Feb 8, 2015 06:00 |
|
Preaching the virtues of the simplicity of fork()+exec() falls flat when basic things like race-free close-on-exec file descriptors were first introduced a couple years ago.
|
# ? Feb 8, 2015 06:11 |
|
I feel like I learn more from reading this thread than I do from my lecturers. Everything ExcessBLarg! pointed out makes perfect sense to me, but my university classes just don't seem to want to facilitate this sort of discussion at all. They're content to just tell us "X is the canonical way to program Y functionality, now here's a project where you reimplement this solved problem but doubtlessly with much worse code." I don't feel like I'm getting the actual nuts-and-bolts of the machine, just a snapshot of the machine itself. The lecture about fork never really branched (hah) out into any discussion of why it was designed that way, what sorts of considerations we might make when designing our own OS, or anything like that. He just explained what fork did and said it was elegant, and I was left thinking "Wait, did you just read me a manpage? This is supposed to be an OS design and programming class, not a "here's what Unix does" class."
|
# ? Feb 8, 2015 06:35 |
|
Suspicious Dish posted:Except for the giant list of pitfalls where the tradeoff isn't as great. There are certainly things to be said for CreateProcess. It just isn't as obviously simpler than the more roundabout fork+exec as many seem to expect it to be.
|
# ? Feb 8, 2015 06:38 |
|
LeftistMuslimObama posted:This is supposed to be an OS design and programming class, not a "here's what Unix does" class." If you're "lucky" the first might also include some snide comments that acknowledge the existence of Windows, but it still won't mention any of the interesting things that NT does which aren't commonly seen in unixen.
|
# ? Feb 8, 2015 06:41 |
|
Plorkyeran posted:If you're "lucky" the first might also include some snide comments that acknowledge the existence of Windows, but it still won't mention any of the interesting things that NT does which aren't commonly seen in unixen. For instance, CreateProcess() is implemented entirely in userspace built on kernel primitives like NtCreateRemoteThread() and NtMapViewOfSection().
|
# ? Feb 8, 2015 06:50 |
|
Plorkyeran posted:There's certainly things to be said for CreateProcess. It just isn't as obviously simpler than the more roundabout fork+exec as many seem to expect it to be. It's not simple, no, but it solves a lot of the race conditions and other ugliness of fork/exec. I prefer an ugly API that always gets things right over a seductively beautiful one with hard, fundamentally unsolvable and tricky issues.
|
# ? Feb 8, 2015 07:02 |
|
They're not unsolvable, I've mentioned a solution to them twice already.
|
# ? Feb 8, 2015 07:15 |
|
LeftistMuslimObama posted:I don't feel like I'm getting the actual nuts-and-bolts of the machine, just a snapshot of the machine itself. The lecture about fork never really branched (hah) out into any discussion of why it was designed that way, what sorts of considerations we might make when designing our own OS, or anything like that. There's also a bias towards Unix in academia. It's a combination of its age, prevalence, academic roots (BSD was a graduate research project), availability of source code, and vendor neutrality. But it's true, it's not the only system out there, and comparing Unix to how VMS did things or NT does them can be enlightening.
|
# ? Feb 8, 2015 07:53 |
|
Spatial posted:Someone's got a vfork() up their rear end in a top hat http://i.imgur.com/JV5cpXs.jpg (from the Schadenfreude thread.)
|
# ? Feb 8, 2015 14:37 |
|
The amusing thing about this "do the right thing" discussion is that Unix is the classic example of the Worse is Better design philosophy, and fork() sounds like a great example. It's a simple API and a simple concept. Rather than doing the right thing, it goes for "simple" and simply doesn't worry about the problems it causes. The caller can deal with those on their own.
|
# ? Feb 8, 2015 15:44 |
|
Dessert Rose posted:I wouldn't say that one-second startup times make anything "unusable" for command line utilities, really. Those benchmarks are much more damning for Android apps, where a 5-second startup time would make me want to launch my device out a window, but a couple of extra seconds to parse an XML file or whatever isn't so bad. It probably took me longer to type the command in the first place. The machine those benchmarks were taken on was much faster than the server I'm usually running Clojure code on. Yeah, it depends on the tool. If it's a package manager or something, disk and network IO from the actual package management is going to dominate. But it means I can't write short scripts in Clojure, or sprinkle it into my bash scripts the way I would awk or Lua, because it takes the runtime of a short script from a few milliseconds to multiple seconds.
|
# ? Feb 8, 2015 17:03 |
|
sarehu posted:They're not unsolvable, I've mentioned a solution to them twice already. You can't malloc in between fork and exec. By loving definition, that's pure insanity and a really stupid solution to a really common problem. One that people repeatedly get wrong. It's the definition of "pit of failure".
|
# ? Feb 8, 2015 19:31 |
|
Xenoveritas posted:The amusing thing about this "do the right thing" discussion is that Unix is the classic example of the Worse is Better design philosophy, and fork() sounds like a great example. It's a simple API and a simple concept. Rather than doing the right thing, it goes for "simple" and simply doesn't worry about the problems it causes. The caller can deal with those on their own. fork() actually is older than Unix (it came from a system called GENIE).
|
# ? Feb 8, 2015 19:41 |
|
Ender.uNF posted:You can't malloc in between fork and exec. So, fork gets a lot more challenging in multithreaded programs. fork+exec still works fine, although some care is needed in the event exec fails. Even in multithreaded programs, the restrictions on child process code are the same as those on signal handlers. They're inconvenient, but it's still workable. My recommendation, since you already have threading as a means to achieve parallelism, is to avoid using fork after spawning secondary threads, except for fork+exec. Part of the problem is that threading support is a total bolt-on in Unix systems. Linux didn't have decent POSIX threads support until Linux 2.6, and (as previously noted) things like the close-on-exec race issue persisted for some time after. Bashing in threading support also disrupted far more APIs and conventions than just fork.
|
# ? Feb 8, 2015 20:24 |
Isn't the conclusion just the obvious one: either you use fork() for multiprocessing, or you use pthreads, but never both? And if you do use fork(), then you drat better avoid depending on any multithreaded libraries.
|
|
# ? Feb 9, 2015 05:03 |
|
nielsm posted:Isn't the conclusion just the obvious, either you use fork() for multiprocessing, or you use pthreads, but never both. And your libraries better tell you whether or not they're multithreaded and the new versions better not suddenly become threaded.
|
# ? Feb 9, 2015 05:08 |
|
How do you execute another process without fork()? I mean, say your app needs to use some external tool to do a part of its job.
|
# ? Feb 9, 2015 09:42 |
|
pseudorandom name posted:And your libraries better tell you whether or not they're multithreaded and the new versions better not suddenly become threaded. This doesn't seem like an unacceptable requirement to me. Threads are a bit of a complicated thing (not just due to fork() issues), so I don't think they should be considered an implementation detail.
|
# ? Feb 9, 2015 16:17 |
|
Athas posted:This doesn't seem like an unacceptable requirement to me. Threads are a bit of a complicated thing (not just due to fork() issues), so I don't think they should be considered an implementation detail. In sane languages, libraries are capable of using threads completely safely, as an implementation detail that users of the library don't actually have to worry about.
|
# ? Feb 9, 2015 16:34 |
|
It's not a language issue. No language on Unix can paper over the fork() damage, and C on other OSes can create processes like a grownup.
|
# ? Feb 9, 2015 16:39 |
|
Sane languages don't expose fork().
|
# ? Feb 9, 2015 17:52 |
|
nielsm posted:Isn't the conclusion just the obvious, either you use fork() for multiprocessing, or you use pthreads, but never both. pseudorandom name posted:And your libraries better tell you whether or not they're multithreaded and the new versions better not suddenly become threaded. You just don't do that.
|
# ? Feb 9, 2015 18:16 |
|
Jabor posted:In sane languages, libraries are capable of using threads completely safely, as an implementation detail that users of the library don't actually have to worry about. Problems that come with threading and signals, like genuine concurrency and reentrancy, are systems problems, not artifacts of the language alone. C, which has very little of a runtime, doesn't mask these at all and so "foists" the problems on the programmer. Managed languages tend to hide them, to the benefit of the programmer, but at some cost to being able to write low-level systems code. ExcessBLarg! fucked around with this message at 18:25 on Feb 9, 2015 |
# ? Feb 9, 2015 18:22 |
|
|
Python exposes fork, heavily uses it (multiprocessing), also exposes threads, and can use threads behind your back, and also exposes libraries incompatible with fork (ctypes).
|
# ? Feb 9, 2015 18:41 |