|
Subjunctive posted:cooperative multitasking is a lot easier to reason about than preemption. you don’t have to worry about flaky data races due to unexpected preemption points, because your preemption points are all explicit, if not necessarily syntactic

it's easier to manage when you have like maybe 10 coroutines, but when you have thousands it's easy to lock up the entire system. i do recommend people use "one big lock" when it comes to concurrency for very similar reasons
|
# ¿ Sep 10, 2023 20:34 |
|
|
pseudorandom name posted:it's too bad programmers aren't any good at making state machines, otherwise we wouldn't need any of this green threads or async/await nonsense

the history of programming is the history of trying to implement a state machine without actually having to write one
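to make the quip concrete, here's a toy python sketch (names and example mine, not from the post): the same countdown written once as a generator, where the language builds the state machine for you, and once as the explicit state machine you'd otherwise write by hand

```python
def countdown_gen(n):
    # the generator version: "yield" hides the suspend/resume state
    while n > 0:
        yield n
        n -= 1

class CountdownMachine:
    # the hand-written version: state is an explicit field you manage
    def __init__(self, n):
        self.n = n

    def step(self):
        # returns the next value, or None when the machine is done
        if self.n <= 0:
            return None
        value = self.n
        self.n -= 1
        return value

m = CountdownMachine(3)
hand_rolled = []
while (v := m.step()) is not None:
    hand_rolled.append(v)

assert list(countdown_gen(3)) == hand_rolled == [3, 2, 1]
```

async/await is the same trade at a larger scale: the compiler slices your function at each suspension point so you never have to write the state fields yourself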
|
# ¿ Sep 10, 2023 23:25 |
|
rjmccall posted:but that was fully understood as a consequence, and we’ve been rigorous about telling people that no, they are not allowed to block tasks on work that is not currently running, and we are not going to find a way to make their code work

plus "here's gcd" i guess, so people didn't have to invent the universe to run something in the background
|
# ¿ Sep 11, 2023 00:06 |
|
Shaggar posted:i just have distaste for the terminology of "non-blocking I/O" because it implies the I/O has happened when it hasn't. you've fired it into a pile of caches and power backups that let you pretend it doesn't have to block eventually. it's mostly safe but i would prefer some other term

i have bad news about blocking io for very similar reasons
|
# ¿ Sep 12, 2023 00:25 |
|
blocking/non-blocking refers to the syscall. synchronous/asynchronous, here, refers to the programming model

a synchronous program can make blocking calls, and it can make non-blocking calls and poll for updates (hello select())

an asynchronous program can make non-blocking or blocking calls, but if you call a blocking operation inside a coroutine, the entire thing pauses, because asynchronous programming relies on apartment/cooperative threading

and it's funny because shaggar might as well be yelling about close() throwing an exception
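the "entire thing pauses" part is easy to demo; a minimal asyncio sketch (my toy code): a plain time.sleep inside a coroutine stalls every other coroutine on the loop, because nothing yields until it returns

```python
import asyncio
import time

order = []

async def blocker():
    order.append("blocker start")
    time.sleep(0.05)          # blocking call, no await: the event loop is stuck
    order.append("blocker end")

async def other():
    order.append("other ran")

async def main():
    await asyncio.gather(blocker(), other())

asyncio.run(main())
# "other ran" only appears after blocker finishes, because the
# blocking sleep never yields control back to the event loop
```

swap the time.sleep for `await asyncio.sleep(0.05)` and the other coroutine runs during the wait instead of after it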
|
# ¿ Sep 12, 2023 01:27 |
|
it is a common misconception that async means 'fire and forget'
|
# ¿ Sep 12, 2023 06:52 |
|
true but i think we're all bored and posting
|
# ¿ Sep 12, 2023 07:27 |
|
Subjunctive posted:_exit() and let Ritchie clean it up

how to find a loop in a linked list: free each item in turn, and if there's a segfault, there's a loop
|
# ¿ Sep 12, 2023 15:06 |
|
turns out knowing the right way to build an app doesn't give you any more motivation to write the code. if anything, knowing the chore ahead is a good deterrent
|
# ¿ Sep 21, 2023 14:57 |
|
MononcQc posted:yeah but the thread is mostly programming language enthusiasts/nerds discussing the intricacies and implications of specific concepts and their implementations. Like it’s often about languages themselves rather than using them.

this thread slapfights about checked exceptions, and the other thread is where we complain about programming
|
# ¿ Sep 22, 2023 14:19 |
|
rotor posted:characterizing code by which mask you feel it would be wearing if it was an actor in traditional japanese noh theater
|
# ¿ Sep 27, 2023 01:43 |
|
i think it boils down to "if you have to ask, you probably don't need it"
|
# ¿ Sep 27, 2023 15:16 |
|
Sapozhnik posted:more accuracy is better, right?

x[0] = -6
x[1] = 64
x[n] = 82 − (1824 − 6048/x[n-2]) / x[n-1]

so this sequence converges to 36 under half and single floats. under doubles, it converges to 42, which is the wrong answer

https://etna.math.kent.edu/vol.52.2020/pp358-369.dir/pp358-369.pdf
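the recurrence is easy to run yourself; a quick sketch in python's 64-bit floats (my code, not the post's). the fixed points are the roots of t³ − 82t² + 1824t − 6048, i.e. 4, 36, and 42; these starting values have a zero coefficient on the 42 component, so exact arithmetic converges to 36, but any rounding error reintroduces the 42 term and it eventually dominates

```python
# iterate x[n] = 82 - (1824 - 6048/x[n-2]) / x[n-1] in doubles
x_prev, x = -6.0, 64.0
for _ in range(1000):
    x_prev, x = x, 82 - (1824 - 6048 / x_prev) / x

# the sequence dips toward 36, then drifts away and settles on 42
print(x)
```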
|
# ¿ Oct 20, 2023 11:25 |
|
redleader posted:floating point is a mistake and anyone who says otherwise is gaslighting you

floating point is great, actually
|
# ¿ Oct 21, 2023 14:03 |
|
see, like, that's the thing about floating point: it's better than what people try to replace it with

for ex: people get mad about negative zero, but that's because they think of floating point as "exact numbers" rather than "approximate numbers after rounding". in reality, -0 is "a very small negative number that's effectively zero" much in the same way +0 is "a very small positive number that's effectively zero", and that's why they're different. sign preservation is useful

people get mad about NaN, and i am sympathetic, but there is a real use. if you're doing a big rear end calculation, you don't want to do error checking after every operation. it makes it hard to do pipelining or superscalar stuff or whatever the kids are into these days, but in more practical terms: error checking after every arithmetic op greatly inflates the size of your program and sabotages the speed. therefore, floating point has an error value, NaN, and just to ensure that "f(x) == f(y)" doesn't accidentally do the right thing, one error can never equal another

then comes subnormals and gradual underflow. if you ask me, this is the neatest trick floating point pulls. when you're storing a number as a float, you could represent the same number in different ways, like 10 as 1.0 × 10**1 or 0.1 × 10**2, so floating point specifies that every number has a single normalized form. it then goes on to say "actually, here are some not normal ones, just to extend the range at the bottom". it's wringing out all the last drops of precision and it's ingenious

if there's any real improvement to be made to floating point, it's a decimal coded mantissa. not for any real accuracy or precision, but because it'll be a little more humane when 0.1+0.2 == 0.3
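the -0, NaN, and subnormal points are all a couple lines of python to see for yourself:

```python
import math

# signed zero: -0.0 compares equal to 0.0 but remembers its sign
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0

# NaN: the in-band error value, unequal even to itself
nan = float("nan")
assert nan != nan

# rounding in binary: 0.1 and 0.2 aren't exactly representable,
# so their sum isn't exactly 0.3
assert 0.1 + 0.2 != 0.3

# gradual underflow: subnormals extend the range below the smallest
# normal double at the cost of precision; 5e-324 is the last stop
assert 5e-324 > 0.0
assert 5e-324 / 2 == 0.0
```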
|
# ¿ Oct 23, 2023 19:22 |
|
numbers are just cursed, floating point's just a little bit more obviously so
|
# ¿ Oct 23, 2023 19:22 |
|
rotor posted:decimal, from the roots deci- meaning "base 10" and -mal meaning "bad" or "wrong"

there's like one, maybe two good usecases for decimal outside of just "humans expect math to work this way", like money

one of them is "with binary coded decimal, it's way easier to translate game scores into tile offsets", at least on dmg era gameboys. and the other is pretty much the same deal, like facebook found that itoa was taking up a disproportionate amount of time in logging
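for the gameboy case, the point is that a BCD score stores one decimal digit per nibble, so "digit to tile index" is a shift and a mask instead of a divide/modulo chain; a sketch with made-up values:

```python
score = 0x0417  # BCD-encoded 417: one decimal digit per 4-bit nibble

# peeling off digit tile indices is just shifts and masks...
tiles = [(score >> shift) & 0xF for shift in (12, 8, 4, 0)]
assert tiles == [0, 4, 1, 7]

# ...whereas a plain binary score needs repeated division, which is
# the itoa cost the facebook logging anecdote is about
n, digits = 417, []
for _ in range(4):
    n, d = divmod(n, 10)
    digits.append(d)
assert digits[::-1] == [0, 4, 1, 7]
```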
|
# ¿ Oct 23, 2023 23:20 |
|
there's this long post about posits being a scam: https://marc-b-reynolds.github.io/math/2019/02/06/Posit1.html

anyway, somewhere halfway through it there's a graph of relative error, and posits do not look great. i'd attach it but i'm too lazy to resize a screenshot to make it upload
|
# ¿ Oct 23, 2023 23:27 |
|
BobHoward posted:gustafson often promotes posits by claiming they'll enable naive programmers to safely implement equations right out of math textbooks with no need to think about numerical stability, but this is basically a lie

yeah honestly, if you really need the "extra good" calculations you probably need a real symbolic calculator
|
# ¿ Oct 25, 2023 01:54 |
|
aside, there's always double-double floating point, like this stuff: https://www.cs.cmu.edu/~quake/robust.html
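the building block behind double-double and shewchuk's robust predicates is the error-free transformation: two floats that together carry a sum exactly. a sketch of knuth's TwoSum (standard algorithm, my code):

```python
def two_sum(a, b):
    # knuth's TwoSum: s is the rounded sum, err the exact rounding
    # error, so s + err == a + b exactly, with no assumption on |a| vs |b|
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    err = (a - a_virtual) + (b - b_virtual)
    return s, err

# the bits that "0.1 + 0.2" rounds away survive in the second term
s, err = two_sum(0.1, 0.2)
assert s == 0.1 + 0.2
assert err != 0.0
```

chain these (a high part and a low part per number) and you get double-double: ~32 significant digits out of ordinary hardware doubles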
|
# ¿ Oct 25, 2023 01:57 |
|
Sapozhnik posted:actually that's an interesting question, how does big-boy financial software do math anyway? (very carefully)

in excel ????
|
# ¿ Oct 25, 2023 03:37 |
|
redleader posted:langs shouldn't have built in number types at all.

we tried this, it succ'd
|
# ¿ Oct 25, 2023 18:02 |
|
rjmccall posted:okay, i skimmed the post, and it is describing this byte to template expansion as if it’s way more novel and interesting than it really is. pretty sure there were jits doing that in the 90’s

yep, thing is, it probably had some name like "direct threading interpreter"
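"direct threading" meaning roughly: instead of switching on opcode numbers at every step, the bytecode is pre-resolved into handler addresses you dispatch through directly. a toy python rendition (function references standing in for computed gotos; all names mine):

```python
# each opcode is a handler function operating on a stack

def op_push(stack, arg):
    stack.append(arg)

def op_add(stack, arg):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def op_mul(stack, arg):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

OPCODES = {0: op_push, 1: op_add, 2: op_mul}

def translate(bytecode):
    # the "threading" step: opcode number -> handler pointer, done once
    return [(OPCODES[op], arg) for op, arg in bytecode]

def run(threaded):
    # dispatch is now just a call through the pre-resolved pointer,
    # no per-instruction table lookup or switch
    stack = []
    for handler, arg in threaded:
        handler(stack, arg)
    return stack

# (2 + 3) * 4
program = [(0, 2), (0, 3), (1, None), (0, 4), (2, None)]
assert run(translate(program)) == [20]
```

copy-and-patch goes one step further and splices the handler bodies themselves together, but the "resolve dispatch ahead of time" idea is the same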
|
# ¿ Jan 11, 2024 01:01 |
|
i mean, it's pretty common for folk techniques to be republished in academia every five to seven years so
|
# ¿ Jan 11, 2024 02:26 |
|
Dijkstracula posted:also the notion of "copy-and-patch JIT"s sound a heck of a lot like the so-called copy-and-annotate binary translation techniques that were popular back in the day, perhaps you would like to learn more starting at about the 15 minute mark of https://www.infoq.com/presentations/dynamic-analysis-tools/

fwiw the paper https://dl.acm.org/doi/pdf/10.1145/3485513 spells out its contributions, and they do cite "Optimizing direct threaded code by selective inlining" from 1998: https://dl.acm.org/doi/10.1145/277650.277743

it's kinda not the same, but yeah, it ain't that different either
|
# ¿ Jan 11, 2024 02:42 |
|
luajit did some type specialisation iirc. this one's more "hey, what if instead of calling a c function per bytecode, we jammed the c code together and made a c function per python function"

there's gonna be some speedup without a lot of overhead or startup costs. it's alright
|
# ¿ Jan 11, 2024 17:29 |
|
Share Bear posted:the transformation to writing everything as a script into writing everything as a module is easy to trip up,

python2 eol'd 4 years ago bud
|
# ¿ Jan 12, 2024 16:54 |
|
feel like i'm reading the steam forums and it's a thread about lazy entitled devs who could just simply ship a product
|
# ¿ Jan 12, 2024 19:45 |
|
today i learned that ?>!:,< is a valid key name in yaml
|
# ¿ Jan 14, 2024 02:52 |
|
"different things should look different": tired, boring

"everything should look the same, because i read sicp once": incredible, awe inspiring

it's no wonder lisp did everything first, well, except popularity and adoption
|
# ¿ Jan 25, 2024 12:09 |
|
lispers will tell you "code is data" and then write a long essay explaining that 9/11 wouldn't have happened if planes were more like lisp, where code isn't data
|
# ¿ Jan 25, 2024 12:12 |
|
https://paulgraham.com/hijack.html fwiw
|
# ¿ Jan 25, 2024 12:39 |
|
Subjunctive posted:then someone mistypes “RWLock” and “RLock”—or they misunderstand closure rules—and you have a silent race and then you’re chasing thousands of data races in just your test suite

this is a really good blog post
|
# ¿ Jan 25, 2024 21:25 |
|
the thing that gets me about rust is like every time you click on a rust post it's got some title like "shared access to an aliased array using mem::swap", and it's "how to do a[0]=1 and a[1]=2" under the watchful gaze of the borrow checker
|
# ¿ Jan 25, 2024 22:19 |
|
FlapYoJacks posted:Rust is excellent because of the borrow checker.

my problem isn't the borrow checker exactly, it's that the weird safety fetishists who've never peeked behind the scenes are even more irritating than your average pink floyd fan. I mean, the borrow checker is quite novel, but the "therefore it's the only way to write safe programs, and so any costs incurred are worth it" is the stretch i have trouble following

in practice, people opt for one of three options to appease the borrow checker

1. Vec<> in a trenchcoat. Passing around integer offsets, or wrapped integer offsets. This is how most people avoid the borrow checker's ire, until they need to do something like "delete an entry in a tree with only a &ref to the parent", and then they move to option two.
2. unsafe in a trenchcoat. aka how every stdlib type works.
3. "Abstract: In this paper we, ..." and you end up with an extra 5000 lines of code to avoid writing one unsafe block. Plus it only works in nightly.

the thing about the borrow checker, and rust as a whole, isn't really that it guarantees "safe code", it's that it guarantees safe uses of an api, which is close enough for most purposes.
|
# ¿ Jan 25, 2024 23:25 |
|
hey left recursion nerds, what's your favourite way to handle left recursion in top down parsing

i have this parsing expression grammar, i want to extend it with means of defining stuff like infix operators, like `e := e "+" e`, and i'm mulling over the various ways of trying to make it work

1. don't do it, and just force users to write right recursive grammars
2. rewrite the grammar internally to something without left recursion, but return the original parse tree
3. shove an operator precedence parser into the back of the peg engine, and let the user define operators for a given parse rule
4. extend the peg algorithm to handle left recursive grammars, either through bounded recursion or memoization

option 1 isn't great, options 2 and 4 are kinda the same in that they both lose the execution model of pegs, and option 3 is basically a pratt parser, or something with precedence climbing

i'm really not sure what the best option is. it feels like "just handle left recursion" is a more invisible ux, but "here is a means to define operators" is a better way to define grammars. i've been leaning towards implementing the memoization trick to just make left recursion work, but then it's easy to write something left associative over right associative by accident, and i'm not quite sure making operator precedence implicit is the right choice either
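for reference, option 3 in miniature: a precedence-climbing loop that gets associativity from an operator table instead of from grammar recursion (toy sketch, my names, atoms are just tokens):

```python
# operator table: precedence level and associativity
PREC = {"+": (1, "left"), "-": (1, "left"),
        "*": (2, "left"), "^": (3, "right")}

def parse(tokens, min_prec=1):
    # precedence climbing: the while loop stands in for left recursion,
    # so "1 - 2 - 3" nests to the left without a left-recursive rule
    lhs = tokens.pop(0)  # assume the next token is an atom
    while tokens and tokens[0] in PREC and PREC[tokens[0]][0] >= min_prec:
        op = tokens.pop(0)
        prec, assoc = PREC[op]
        # left-assoc ops bump the minimum so equal precedence binds left;
        # right-assoc ops keep it so equal precedence binds right
        rhs = parse(tokens, prec + 1 if assoc == "left" else prec)
        lhs = (op, lhs, rhs)
    return lhs

assert parse(["1", "-", "2", "-", "3"]) == ("-", ("-", "1", "2"), "3")
assert parse(["1", "^", "2", "^", "3"]) == ("^", "1", ("^", "2", "3"))
```

the nice part is that associativity and precedence are explicit per operator, which is exactly what the memoized-left-recursion route makes implicit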
|
# ¿ Jan 27, 2024 18:05 |
|
yep, that's the one: https://web.cs.ucla.edu/~todd/research/pepm08.pdf

the other one i'm aware of is the bounded left recursion one, https://arxiv.org/pdf/1207.0443.pdf
|
# ¿ Jan 27, 2024 18:55 |
|
Nomnom Cookie posted:the real answer is it sounds to me like you’ve boxed yourself in by choosing peg too early. first figure out what set of accepted grammars has the right ergonomics then make a judgment on how you’re going to handle them. imo

i mean, i could give up on having negative lookahead and ordered choice, yes, but i've decided that returning shift/reduce errors isn't an improvement, especially because i'm also parsing markup languages
|
# ¿ Jan 28, 2024 00:12 |
|
|
Nomnom Cookie posted:the real answer is it sounds to me like you’ve boxed yourself in by choosing peg too early. first figure out what set of accepted grammars has the right ergonomics then make a judgment on how you’re going to handle them. imo

that's kinda what i'm doing: i have pegs + left recursive features, and i'm trying to make this judgement

Nomnom Cookie posted:does trying to snow people by throwing around jargon often work for you? did I say some words that sounded like “I think you should have gone with LALR” to you?

like, "you chose peg too early" ~> "you chose top down methods too early, if left recursion is a problem you want to solve"

"figure out the right set & handle them" ~> "left recursion generally means handling things bottom up in some form or another, which inevitably means building some sort of LR-like automaton to handle the nondeterminism"

i did assume you weren't suggesting any of the methods i'd already outlined to bolt things onto the peg engine, like cancellation parsing or precedence climbing. you were suggesting i go back and find a "one size fits all" approach, but really, unless you were about to spring an earley parser on me, you were going to suggest some form of LR parsing

i mean, sure, maybe you're a big fan of demers' generalised left corner parser, but i figured if that was the case you'd have been infodumping and not shitposting

Nomnom Cookie posted:pop up a message box that says here’s a lookahead kid go get yourself a real grammar

but mostly this set the vibe, hth
|
# ¿ Jan 28, 2024 03:10 |