|
i want to try f# some day i guess i can hold my nose and ignore the terribleness of the .net standard library
|
# ? Apr 12, 2017 15:17 |
|
|
gonadic io posted:
Why does it think the block is an argument to foo? foo is clearly being called with no arguments.
|
# ? Apr 12, 2017 15:52 |
|
Doom Mathematic posted:Why does it think the block is an argument to foo? foo is clearly being called with no arguments. Currying. foo()() or foo() {} are both perfectly valid scala syntax if you have def foo()() = ...
|
# ? Apr 12, 2017 16:28 |
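For anyone following along at home, a minimal sketch of the Scala behaviour being described (`foo` and `bar` are made-up names, not from the actual code):

```scala
// With two parameter lists, a trailing {} block is just the second argument list.
def foo()(block: => Int): Int = block

val a = foo()(41 + 1)     // parens form
val b = foo() { 41 + 1 }  // block form; identical meaning
assert(a == 42 && b == 42)

// With only one parameter list, writing `bar() { 41 + 1 }` makes the compiler
// try to apply the *result* of bar() to the block -- hence the confusing
// "block is an argument" behaviour complained about above.
def bar(): Int = 0
```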
|
Sapozhnik posted:i want to try f# some day actually .net has the best stdlib
|
# ? Apr 12, 2017 16:42 |
|
gonadic io posted:i wanted to use blocks to scope some code so i could have variables with the same name did you now
|
# ? Apr 12, 2017 17:01 |
|
gonadic io posted:i wanted to use blocks to scope some code so i could have variables with the same name i think the fault's not in your kleene stars here
|
# ? Apr 12, 2017 18:36 |
|
gently caress mutability
|
# ? Apr 12, 2017 18:38 |
|
gonadic io posted:gently caress mutability
|
# ? Apr 12, 2017 18:39 |
|
gonadic io posted:gently caress mutability
|
# ? Apr 12, 2017 18:46 |
|
gonadic io posted:gently caress mutability
|
# ? Apr 12, 2017 18:48 |
|
gonadic io posted:Muck futability
|
# ? Apr 12, 2017 18:50 |
|
so when you pass a giant blob of data off to a function and it returns a BRAND NEW object with w/e 1-bit change you wanted, is there any acknowledgment of the runtime actually copying and shuffling all this data around behind the scenes to keep up the "immutable" fiction? or is that like ladder step 4: why is my performance poo poo
|
# ? Apr 12, 2017 19:11 |
|
mute fuckability?
|
# ? Apr 12, 2017 19:10 |
|
JawnV6 posted:so when you pass a giant blob of data off to a function and it returns a BRAND NEW object with w/e 1-bit change you wanted, is there any acknowledgment of the runtime actually copying and shuffling all this data around behind the scenes to keep up the "immutable" fiction? or is that like ladder step 4: why is my performance poo poo you talking about copy on write? depends on the lang i guess but idk compare addresses or introspect the data or something
|
# ? Apr 12, 2017 19:11 |
|
JawnV6 posted:so when you pass a giant blob of data off to a function and it returns a BRAND NEW object with w/e 1-bit change you wanted, is there any acknowledgment of the runtime actually copying and shuffling all this data around behind the scenes to keep up the "immutable" fiction? or is that like ladder step 4: why is my performance poo poo usually you'd get back an object which contains the big blob plus a small notice to remember that one bit changed, though with a fair bit more intelligence, as you'd typically be using some higher-level data structure where this can be represented. either way immutable data structures tend to be all about amortized performance and lazily doing merges of many small changes into a new thing. it is not the be-all end-all, but it works for a lot of stuff
|
# ? Apr 12, 2017 19:18 |
|
JawnV6 posted:so when you pass a giant blob of data off to a function and it returns a BRAND NEW object with w/e 1-bit change you wanted, is there any acknowledgment of the runtime actually copying and shuffling all this data around behind the scenes to keep up the "immutable" fiction? or is that like ladder step 4: why is my performance poo poo you pass that stuff to a C escape hatch and pretend all is well. Otherwise functional data is usually organized in some form of tree that branches to various degrees, so that all your modifications take O(log n) complexity and you get a new tree root with most of the branches and leaves shared with the original one, while also pointing at the O(log n) new nodes you needed for your modified version. Garbage collection is assumed to be there and is essential to reap the remnants of the old trees once nothing refers to them anymore. You can then put on your computer scientist and functional weenie hats (the latter reads "Make functional programming …"). Such is the Okasaki approach to functional data structure argumentation. MononcQc fucked around with this message at 19:26 on Apr 12, 2017 |
# ? Apr 12, 2017 19:23 |
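A concrete sketch of that tree sharing, using Scala's immutable Vector (a wide trie, so an update only copies the O(log n) path from the changed leaf to a new root):

```scala
val xs = Vector.fill(1000000)(0)   // the "giant blob"
val ys = xs.updated(123456, 1)     // a BRAND NEW root, but almost everything shared

assert(xs(123456) == 0)  // original version is untouched
assert(ys(123456) == 1)  // new version sees the one change
// Only a handful of small trie nodes were copied to build ys; once nothing
// references xs anymore, the GC reclaims the few nodes unique to it.
```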
|
Sapozhnik posted:i want to try f# some day the .net std lib is really good unless you're talking about .net core, in which case i think it's not as good
|
# ? Apr 12, 2017 21:46 |
|
MononcQc posted:you pass that stuff to a C escape hatch and pretend all is well. i work on embedded runtimes for C folks, standing athwart hardware yelling Stop at dynamic memory allocation. like i KNOW y'all aren't running out of write-once NV, there's probably mutable RAM in there that the machinery is hiding from the FP weenie side of things, but i didn't have a clear picture of how that abstraction was maintained
|
# ? Apr 13, 2017 00:00 |
|
JawnV6 posted:that makes a lot of sense The standard image explaining it is this: when any reference to xs is gone, it can finally be GC'd, and then similarly for all the data it alone references. Interestingly enough, when you have immutable data structures, the GC algorithm becomes a lot simpler, and in many cases, can be a lot more efficient because it can make more assumptions about data without risks of breaking anything.
|
# ? Apr 13, 2017 00:04 |
|
it's basically the same way Git handles its object store - a DAG of objects, where changes only propagate as far down the tree as they need to so that most of the tree can be reused on each commit. its gc even works the same.
|
# ? Apr 13, 2017 00:10 |
|
also 95% of the code i write isn't repeatedly mutating big objects in tight loops
|
# ? Apr 13, 2017 00:33 |
|
the main problems I have with f# are 1. it seriously uses singly linked lists as its default list type, probably so fp nerds can write head :: tail and feel good about themselves and 2. the f# standard lib and .net standard lib kinda chafe against each other in their conventions
|
# ? Apr 13, 2017 00:41 |
|
Also while we're at it, the underlying memory model is fun to dig into, too. In the case of Erlang, the VM organizes memory into multiple, layered allocators. Each scheduler of the VM, on each core, owns its own version of these allocators. sys_alloc and mseg_alloc are used to interact with the OS and carry all the memory on their own, and 9 sub-allocators are used for the different types of memory in the VM.
Each of these sub-allocators will request memory from mseg_alloc and sys_alloc depending on the use case, in one of two possible ways. The first way is to act as a multiblock carrier (mbcs), which will fetch chunks of memory to be used for many Erlang terms at once. For each mbc, the VM will set aside a given amount of memory (about 8MB by default in our case, configurable by tweaking VM options), and each term allocated will be free to go look into the many multiblock carriers to find some decent space in which to reside.
Whenever the item to be allocated is greater than the single block carrier threshold (sbct), the allocator switches this allocation into a single block carrier (sbcs). A single block carrier will request memory directly from mseg_alloc for the first mmsbc (a configurable counter) entries, and then switch over to sys_alloc and store the term there until it's deallocated.
Whenever a multiblock carrier (or one of the first mmsbc single block carriers) can be reclaimed, mseg_alloc will try to keep it in memory for a while, so that the next allocation spike that hits your VM can use pre-allocated memory rather than needing to ask the system for more each time. You then need to know the different memory allocation strategies of the Erlang virtual machine.
This helps give some interesting approaches to data allocation, compaction, and cache reuse.
|
# ? Apr 13, 2017 00:57 |
|
MononcQc posted:You can then put on your computer scientist and functional weenie hats (the latter reads "Make functional programming i giggled at this
|
# ? Apr 13, 2017 00:59 |
|
MononcQc posted:The standard image explaining it is this: fleshweasel posted:1. it seriously uses singly linked lists as its default list type, probably so fp nerds can write head :: tail and feel good about themselves and If I understand the diagram correctly, with a singly-linked list, you can modify the head with no real penalty, but modifying the last entry in the list involves creating an entirely new list. So, if you had a doubly-linked list, it wouldn't be possible to make any modifications to the list without creating an entirely new list. Maybe that's the rationale? I suppose there are other list implementations which could have been used, I'm not so hot on data structures though.
|
# ? Apr 13, 2017 01:18 |
|
Doom Mathematic posted:If I understand the diagram correctly, with a singly-linked list, you can modify the head with no real penalty, but modifying the last entry in the list involves creating an entirely new list. The way to make a doubly linked list (or a ring buffer) is to use two lists as a zipper. The list [1,2,3,4,5,6,7,8,9,10] gets to actually be represented (and iterated) as:
[] [1,2,3,4,5,6,7,8,9,10]
[1] [2,3,4,5,6,7,8,9,10]
[2,1] [3,4,5,6,7,8,9,10]
[3,2,1] [4,5,6,7,8,9,10]
[4,3,2,1] [5,6,7,8,9,10]
...
[1,2,3,4,5,6,7,8,9,10] []
You walk around the list by maintaining buffers of the old ones, and insertion at the point you navigate to is then O(1). For example, inserting 1337 in:
[4,3,2,1] [5,6,7,8,9,10]
is done by just inserting it O(1) at the front:
[4,3,2,1] [1337,5,6,7,8,9,10]
As you rewind the list or re-modify it, you get cheap iteration. This is all done behind proper abstractions so you don't get to care about how it works unless you implement it, of course. Memory use is still more intensive though.
|
# ? Apr 13, 2017 01:24 |
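The zipper above translates almost directly into Scala (a sketch, using the same [1..10] example):

```scala
// A list zipper: reversed prefix + suffix, with the focus between them.
case class Zipper[A](before: List[A], after: List[A]) {
  def right: Zipper[A] = after match {
    case x :: rest => Zipper(x :: before, rest) // O(1) step forward
    case Nil       => this
  }
  def insert(a: A): Zipper[A] = Zipper(before, a :: after) // O(1) at the focus
  def toList: List[A] = before.reverse ++ after
}

val z   = Zipper(Nil, List(1,2,3,4,5,6,7,8,9,10))
val at4 = z.right.right.right.right   // [4,3,2,1] [5,6,7,8,9,10]
val z2  = at4.insert(1337)            // [4,3,2,1] [1337,5,6,7,8,9,10]
assert(z2.toList == List(1,2,3,4,1337,5,6,7,8,9,10))
```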
|
Doom Mathematic posted:If I understand the diagram correctly, with a singly-linked list, you can modify the head with no real penalty, but modifying the last entry in the list involves creating an entirely new list. Of course with laziness you can create monstrosities like 2-3 finger trees that are log time for just about everything
|
# ? Apr 13, 2017 01:24 |
I found it. I found the functional weenie hat
|
|
# ? Apr 13, 2017 01:25 |
|
MononcQc posted:Problem with a doubly linked list is that to attach two elements together, one must not exist before the other, and they must not be modified, otherwise you'll mutate it. You can't have doubly-linked and immutable lists in that sense. Fun fact: the type of a zipper is the algebraic formal derivative of the type of the container it's zipping over (up to isomorphism)
|
# ? Apr 13, 2017 01:28 |
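Spelling that fun fact out for lists (this is the standard result from McBride's "derivative of a regular type" line of work):

```latex
% A list of a's satisfies L(a) = 1 + a * L(a), so formally L(a) = 1/(1-a).
% Differentiating gives the type of one-hole contexts:
\[
  L(a) = \frac{1}{1-a}
  \qquad\Rightarrow\qquad
  \frac{dL}{da} = \frac{1}{(1-a)^2} = L(a) \times L(a)
\]
% A pair of lists -- exactly the (reversed prefix, suffix) zipper above,
% with the focused element filling the hole.
```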
|
Malcolm XML posted:Fun fact: the type of a zipper is the algebraic formal derivative of the type of the container its zipping over (up to isomorphism) I prefer the bit where oleg comes in and uses call/cc to make a generic one for any data type without any math required
|
# ? Apr 13, 2017 01:29 |
|
MononcQc posted:I prefer the bit where oleg comes in and uses call/cc to make a generic one for any data type without any math required If u say the word continuation 3 times in front of a repl Oleg will appear and solve your problem but in cps
|
# ? Apr 13, 2017 01:33 |
|
It's great that in fp iterating through a doubly linked list apparently requires allocation at every step
|
# ? Apr 13, 2017 01:52 |
|
Maybe your lists aren't lazy enough if they can't be doubly linked but this stuff always ties my brain into knots
|
# ? Apr 13, 2017 01:54 |
|
MononcQc posted:The standard image explaining it is this: simplified gc reasoning makes me wonder if anyone's doing FP runtimes on top of CHERI or any of the RISC-V memory protection models or if those types of mixes are boring/unnecessary in the first place
|
# ? Apr 13, 2017 01:55 |
|
JawnV6 posted:was wondering how you'd re-use an old object without a complex 'how did i get here' pointer model, but duh you just break out the new tree to the root and pathways into the old are one-way did someone say risc-v memory protection models my friend, fictional software taking advantage of fictional features of fictional microarchitectures is the entire rhyme and reason of risc-v
|
# ? Apr 13, 2017 02:14 |
|
there's one guy i follow on mastodon that exclusively posts about risc-v memory models and i just wonder how he does it so much without chafing. like imagine all the goofy poo poo you could cram into 128b addresses. ASLR? lol just put a 'key' in the upper bits, they'll never find you!
|
# ? Apr 13, 2017 03:31 |
|
MononcQc posted:You can then put on your computer scientist and functional weenie hats (the latter reads "Make functional programming Characterizing the techniques in Okasaki as "no no, overall this is still O(1) per operation despite the cost of reversal being O(n)" is straight up incorrect. The final queue implementation is legit O(1) on every operation. The trick is to perform a step or two of reversing the second stack every time a new element is added to or removed from the queue through clever usage of lazy evaluation. That way the memoization you're talking about is only a constant factor speedup. The Laplace Demon fucked around with this message at 04:51 on Apr 13, 2017 |
# ? Apr 13, 2017 04:46 |
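For reference, the simple two-stack queue being contrasted here, sketched in strict Scala. This is the amortized-O(1) version with the O(n) reversal spike; Okasaki's real-time queue interleaves that reversal incrementally under lazy evaluation so every operation is worst-case O(1):

```scala
// Strict two-stack FIFO: enqueue onto back, dequeue from front, and reverse
// back -> front only when front runs dry (the O(n) spike that laziness
// smooths out in Okasaki's real-time version).
case class Fifo[A](front: List[A], back: List[A]) {
  def enqueue(a: A): Fifo[A] = Fifo(front, a :: back)
  def dequeue: (A, Fifo[A]) = front match {
    case x :: rest => (x, Fifo(rest, back))
    case Nil =>
      back.reverse match {
        case x :: rest => (x, Fifo(rest, Nil))
        case Nil       => throw new NoSuchElementException("empty queue")
      }
  }
}

val q = Fifo[Int](Nil, Nil).enqueue(1).enqueue(2).enqueue(3)
val (first, _) = q.dequeue
assert(first == 1) // FIFO order preserved despite LIFO internals
```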
|
Doom Mathematic posted:If I understand the diagram correctly, with a singly-linked list, you can modify the head with no real penalty, but modifying the last entry in the list involves creating an entirely new list. The reason linked lists are a bad default list type is they have much worse CPU cache performance (among other things) than array lists. Google's rule of thumb for when to use a linked list over an array is that if you do an order of magnitude more removals/additions in the middle of the list than you do traversals, use a linked list; otherwise, use an array list. Which doesn't even apply to immutable singly linked lists, they're just lovely and slow and dumb. This is why every Haskell programmer that makes real software (oxymoron?) will tell you to stay the gently caress away from String and use one of the many Data.Text variants which are array-backed.
|
# ? Apr 13, 2017 06:33 |
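A rough Scala illustration of the tradeoff described above (indexed access on a cons list walks every cell; an array-backed structure jumps nearly straight there):

```scala
val linked   = List.range(0, 100000)   // singly linked: O(n) indexed access
val arrayish = Vector.range(0, 100000) // array-backed trie: effectively O(1)

assert(linked(99999) == 99999)   // chases 99,999 pointers, trashing the cache
assert(arrayish(99999) == 99999) // a few contiguous trie hops
```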
|
gonadic io posted:gently caress mutability
i feel like these constraints let you reason locally about mutability and might negate a lot of the advantages an immutable data structure might have in improving reasoning about the program while also retaining the performance benefits of mutable data structures
|
# ? Apr 13, 2017 11:20 |
|
|
here's a good article about how non-aliased mutability improves program understanding http://manishearth.github.io/blog/2015/05/18/the-problem-with-shared-mutability/ quote:Aliasing with mutability in a sufficiently complex, single-threaded program is effectively the same thing as accessing data shared across multiple threads without a lock
|
# ? Apr 13, 2017 11:23 |
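A tiny Scala sketch of the shared-mutability hazard that article describes: two names aliasing one mutable buffer, so a mutation through either is visible through both.

```scala
import scala.collection.mutable.ArrayBuffer

val xs = ArrayBuffer(1, 2, 3)
val ys = xs              // an alias, not a copy
ys += 4                  // mutation through one name...
assert(xs.length == 4)   // ...silently visible through the other
// Non-aliased mutability (a la Rust's &mut) rules this situation out
// at compile time, which is the article's point.
```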