tef
May 30, 2004

-> some l-system crap ->

Subjunctive posted:

cooperative multitasking is a lot easier to reason about than preemption. you don’t have to worry about flaky data races due to unexpected preemption points, because your preemption points are all explicit, if not necessarily syntactic

it's easier to manage when you have like maybe 10 coroutines but when you have thousands, it's easy to lock up the entire system

i do recommend people use "one big lock" when it comes to concurrency for very similar reasons


tef
May 30, 2004

-> some l-system crap ->

pseudorandom name posted:

its too bad programmers aren't any good at making state machines otherwise we wouldn't need any of this green threads or async/await nonsense

the history of programming is the history of trying to implement a state machine without actually having to write one

tef
May 30, 2004

-> some l-system crap ->

rjmccall posted:

but that was fully understood as a consequence, and we’ve been rigorous about telling people that no, they are not allowed to block tasks on work that is not currently running, and we are not going to find a way to make their code work

plus "here's gcd" i guess, so people didn't have to invent the universe to run something in the background

tef
May 30, 2004

-> some l-system crap ->

Shaggar posted:

i just have distaste for the terminology of "non-blocking I/O" because it implies the I/O has happened when it hasn't. you've fired it into a pile of caches and power backups that let you pretend it doesnt have to block eventually. its mostly safe but i would prefer some other term

i have bad news about blocking io for very similar reasons

tef
May 30, 2004

-> some l-system crap ->
blocking/non blocking refers to the syscall. and synchronous/asynchronous, here, refers to the programming model

a synchronous program can make blocking calls, and it can make non blocking calls and poll for updates (hello select())

an asynchronous program can make non blocking or blocking calls, but if you call a blocking operation inside a coroutine, the entire thing pauses, because asynchronous programming relies on apartment/cooperative threading
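to make that concrete, here's a toy asyncio sketch (names made up, timings approximate): two coroutines that await overlap fine, two that make a blocking call run back to back because the whole loop stalls

code:

import asyncio, time

async def polite(name):
    await asyncio.sleep(1)      # non blocking: yields to the event loop
    print(name, "done")

async def rude(name):
    time.sleep(1)               # blocking call: the entire loop pauses
    print(name, "done")

async def main():
    t = time.monotonic()
    await asyncio.gather(polite("a"), polite("b"))
    print("awaiting pair:", round(time.monotonic() - t, 1))   # ~1s, they overlap

    t = time.monotonic()
    await asyncio.gather(rude("c"), rude("d"))
    print("blocking pair:", round(time.monotonic() - t, 1))   # ~2s, serialized

asyncio.run(main())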

and it's funny because shaggar might as well be yelling about close() throwing an exception

tef
May 30, 2004

-> some l-system crap ->
it is a common misconception that async means 'fire and forget'

tef
May 30, 2004

-> some l-system crap ->
true but i think we're all bored and posting

tef
May 30, 2004

-> some l-system crap ->

Subjunctive posted:

_exit() and let Ritchie clean it up

how to find a loop in a linked list: free each item in turn, and if there's a segfault, there's a loop

tef
May 30, 2004

-> some l-system crap ->
turns out knowing the right way to build an app doesn't give you any more motivation to write the code

if anything, knowing the chore ahead is a good deterrent

tef
May 30, 2004

-> some l-system crap ->

MononcQc posted:

yeah but the thread is mostly programming language enthusiasts/nerds discussing the intricacies and implications of specific concepts and their implementations. Like it’s often about languages themselves rather than using them.

this thread slapfights about checked exceptions, and the other thread is where we complain about programming

tef
May 30, 2004

-> some l-system crap ->

rotor posted:

characterizing code by which mask you feel it would be wearing if it was an actor in traditional japanese noh theater

Only registered members can see post attachments!

tef
May 30, 2004

-> some l-system crap ->
i think it boils down to "if you have to ask, you probably don't need it"

tef
May 30, 2004

-> some l-system crap ->

Sapozhnik posted:

more accuracy is better, right?

x[0] = -6
x[1] = 64

x[n] = 82 − (1824 − 6048/x[n-2]) / x[n-1]

so this sequence converges to 36, under half and single floats

under doubles, it converges to 42, which is the wrong answer

https://etna.math.kent.edu/vol.52.2020/pp358-369.dir/pp358-369.pdf
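if you want to reproduce it, here's a quick numpy sketch of the quoted recurrence, coercing every step to the target precision (assuming numpy's float16 is a fair stand-in for half) and seeing which limit each precision lands on

code:

import numpy as np

def iterate(dtype, steps=40):
    # seeds from the quoted post, everything rounded to the target precision
    x = [dtype(-6), dtype(64)]
    for _ in range(steps):
        x.append(dtype(82) - (dtype(1824) - dtype(6048) / x[-2]) / x[-1])
    return x[-1]

for dt in (np.float16, np.float32, np.float64):
    print(np.dtype(dt).name, iterate(dt))

(the fixed points check out: 4, 36 and 42 are all roots of c**3 = 82c**2 - 1824c + 6048)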

tef
May 30, 2004

-> some l-system crap ->

redleader posted:

floating point is a mistake and anyone who says otherwise is gaslighting you

floating point is great, actually

tef
May 30, 2004

-> some l-system crap ->
see, like, that's the thing about floating point, it's better than what people try to replace it with

for ex:

people get mad about negative zero, but that's because they think of floating point as "exact numbers" rather than "approximate numbers after rounding". in reality, -0 is "a very small negative number that's effectively zero" much in the same way +0 is "a very small number that's effectively zero", and that's why they're different. sign preservation is useful

people get mad about NaN, and i am sympathetic, but there is a real use. if you're doing a big rear end calculation, you don't want to do error checking after every operation. it makes it hard to do pipelining or superscalar stuff or whatever the kids are into these days, but in more practical terms: error checking after every arithmetic op greatly inflates the size of your program and sabotages the speed. therefore, floating point has an error value, NaN, and just to ensure that "f(x) == f(y)" doesn't accidentally do the right thing, let's make it so one error can never equal another.

then come subnormals and gradual underflow. if you ask me, this is the neatest trick floating point pulls. when you're storing a number as a float, you could represent the same number in different ways, like 10 as 0.1 * 10**2 or 1.0 * 10**1, and so floating point specifies that all numbers have a single normal form. floating point then goes on to say "actually, here are some not normal ones, just so we can extend the range of precision". it's wringing out all the last drops of precision and it's ingenious

if there's any real improvement to be made for floating point, it's to have a decimal coded mantissa. not for any real accuracy or precision, but because it'll be a little more humane when 0.1+0.2 == 0.3
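all of the above is pokeable from python, fwiw:

code:

import math, sys

# -0.0 keeps the sign of a vanishing negative quantity
print(-1e-320 * 1e-10)               # -0.0: underflowed, sign preserved
print(math.copysign(1.0, -0.0))      # -1.0: the sign is really there

# NaN propagates so you check once at the end, and never equals itself
nan = float("nan")
print(nan + 1.0, nan == nan)         # nan False

# gradual underflow: subnormals live below the smallest normal float
print(sys.float_info.min)            # ~2.2e-308, smallest normal double
print(5e-324, 5e-324 > 0.0)          # smallest subnormal, still nonzero

# and the binary mantissa being inhumane
print(0.1 + 0.2 == 0.3)              # False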

tef
May 30, 2004

-> some l-system crap ->
numbers are just cursed, floating point's just a little bit more obviously so

tef
May 30, 2004

-> some l-system crap ->

rotor posted:

decimal, from the roots deci- meaning "base 10" and -mal meaning "bad" or "wrong"

there's like one, maybe two good use cases for decimal outside of just "humans expect math to work this way", like money

one of them is "with binary coded decimal, it's way easier to translate game scores into tile offsets", at least on dmg era gameboys

and the other is pretty much the same deal, like facebook found that itoa was taking up a disproportionate amount of time in logging
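the gameboy trick, as a toy (the tile base is made up; on the real hardware the DAA instruction keeps the score in bcd for you):

code:

# toy sketch of bcd-to-tiles; TILE_ZERO is a hypothetical tile index
score = 0x0415                 # the number 415, one decimal digit per nibble
TILE_ZERO = 0x30               # assumed index of the "0" glyph in the tileset

tiles = [TILE_ZERO + ((score >> shift) & 0xF) for shift in (12, 8, 4, 0)]
print([hex(t) for t in tiles]) # one tile per digit, no division by 10 anywhere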

tef
May 30, 2004

-> some l-system crap ->
there's this long post about posits being a scam

https://marc-b-reynolds.github.io/math/2019/02/06/Posit1.html

anyway somewhere halfway through it, there's this graph of relative error and posits do not look great

i'd attach it but i'm too lazy to resize a screenshot to make it upload

tef
May 30, 2004

-> some l-system crap ->

BobHoward posted:

gustafson often promotes posits by claiming they'll enable naive programmers to safely implement equations right out of math textbooks with no need to think about numerical stability, but this is basically a lie

yeah

honestly, if you really need the "extra good" calculations you probably need a real symbolic calculator

tef
May 30, 2004

-> some l-system crap ->
aside: there's always double-double floating point, like this stuff https://www.cs.cmu.edu/~quake/robust.html :2bong:
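the building block under double-double (and shewchuk's expansions) is the error-free two-sum, which is tiny. a sketch:

code:

def two_sum(a: float, b: float):
    # knuth's error-free transformation: s is the rounded sum,
    # e is the exact roundoff, so a + b == s + e exactly
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

s, e = two_sum(0.1, 0.2)
print(s, e)   # the double result, plus the bit that got rounded away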

tef
May 30, 2004

-> some l-system crap ->

Sapozhnik posted:

actually that's an interesting question, how does big-boy financial software do math anyway? (very carefully)

in excel ????

tef
May 30, 2004

-> some l-system crap ->

redleader posted:

langs shouldn't have built in number types at all.

we tried this, it succ'd
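(peano's succ, for anyone who missed the pun. the no-built-in-numbers world is roughly this toy:)

code:

# toy peano naturals: no built-in number type, just zero and succ
ZERO = ()

def succ(n):
    return (n,)

def add(a, b):
    return b if a == ZERO else succ(add(a[0], b))

def to_int(n):   # purely for display
    return 0 if n == ZERO else 1 + to_int(n[0])

three = succ(succ(succ(ZERO)))
two = succ(succ(ZERO))
print(to_int(add(three, two)))   # 5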

tef
May 30, 2004

-> some l-system crap ->

rjmccall posted:

okay, i skimmed the post, and it is describing this bytecode-to-template expansion as if it’s way more novel and interesting than it really is. pretty sure there were jits doing that in the 90’s

yep

thing is it probably had some name like "direct threaded interpreter"

tef
May 30, 2004

-> some l-system crap ->
i mean, it's pretty common for folk techniques to be republished in academia every five to seven years so

tef
May 30, 2004

-> some l-system crap ->

Dijkstracula posted:

also the notion of "copy-and-patch JIT"s sound a heck of a lot like the so-called copy-and-annotate binary translation techniques that were popular back in the day, perhaps you would like to learn more starting at about the 15 minute mark of https://www.infoq.com/presentations/dynamic-analysis-tools/

fwiw the paper https://dl.acm.org/doi/pdf/10.1145/3485513 has this as the contributions

quote:

Our contributions are:

(1) The concept of a binary stencil, which is a pre-built implementation of an AST node or bytecode opcode with missing values (immediate literals, stack variable offsets, and branch and call targets) to be patched in at runtime.

(2) An algorithm that uses a library with many binary stencil variants to emit optimized machine code. There are two types of variants: one that enumerates different parameter configurations (whether they are literals, in different registers, or on the stack) and one that enumerates different code patterns (a single AST node/bytecode or a supernode of a common AST subtree/bytecode sequence).

and they do cite "Optimizing direct threaded code by selective inlining" in 1998 https://dl.acm.org/doi/10.1145/277650.277743

it's kinda not the same but yeah it ain't that different either

tef
May 30, 2004

-> some l-system crap ->
luajit did some type specialisation iirc, this one's more "hey, what if instead of calling a c function per bytecode, we jammed the c code together and made a c-function per python function"

there's gonna be some speedup, without a lot of overhead or startup costs, it's alright
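as a toy of the difference: decode-and-dispatch on every op, versus specialising each op into a closure once and running the fused list. the stencil jit does the same thing with machine code instead of closures

code:

def run_interp(code, env):
    # classic interpreter: decode and dispatch on every single op
    stack = []
    for op, arg in code:
        if op == "load":
            stack.append(env[arg])
        elif op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

def compile_closures(code):
    # specialise each op once, "patching in" its argument,
    # so execution is just running the fused list
    ops = []
    for op, arg in code:
        if op == "load":
            ops.append(lambda st, env, k=arg: st.append(env[k]))
        elif op == "push":
            ops.append(lambda st, env, v=arg: st.append(v))
        elif op == "add":
            ops.append(lambda st, env: st.append(st.pop() + st.pop()))
    def run(env):
        stack = []
        for f in ops:
            f(stack, env)
        return stack.pop()
    return run

prog = [("load", "x"), ("push", 2), ("add", None)]
print(run_interp(prog, {"x": 40}))        # 42
print(compile_closures(prog)({"x": 40}))  # 42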

tef
May 30, 2004

-> some l-system crap ->

Share Bear posted:

the transition from writing everything as a script to writing everything as a module is easy to trip up on,

cause module packaging requires the magic words __init__.py and __main__.py (or cramming main into something else via init)

making modules is easy once you learn the magic words. there's a blog post i share about this when it comes up that i cannot find in my bookmarks

python2 eol'd 4 years ago bud
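for the record, the magic words amount to about this much (package and module names hypothetical):

code:

# mypkg/
#     __init__.py    <- can be empty; marks the directory as a package
#     __main__.py    <- what `python -m mypkg` runs
#     core.py        <- hypothetical module with a main() in it

# mypkg/__main__.py:
from mypkg.core import main

if __name__ == "__main__":
    main()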

tef
May 30, 2004

-> some l-system crap ->
feel like i'm reading the steam forums and it's a thread about lazy entitled devs who could just simply ship a product

tef
May 30, 2004

-> some l-system crap ->
today i learned that ?>!:,< is a valid key name in yaml

tef
May 30, 2004

-> some l-system crap ->
"different things should look different" tired, boring

"everything should look the same, because i read sicp once" incredible, awe inspiring

it's no wonder lisp did everything first, well, except popularity and adoption

tef
May 30, 2004

-> some l-system crap ->
lispers will tell you "code is data" and then write a long essay explaining that 9/11 wouldn't have happened if planes were more like lisp, where code isn't data

tef
May 30, 2004

-> some l-system crap ->
https://paulgraham.com/hijack.html fwiw

tef
May 30, 2004

-> some l-system crap ->

Subjunctive posted:

then someone mistypes “RWLock” and “RLock”—or they misunderstand closure rules—and you have a silent race and then you’re chasing thousands of data races in just your test suite

this is a really good blog post

tef
May 30, 2004

-> some l-system crap ->
the thing that gets me about rust is like every time you click on a rust post it's got some title like "shared access to an aliased array using mem::swap", and it's "how to do a[0]=1 and a[1]=2" under the watchful gaze of the borrow checker

tef
May 30, 2004

-> some l-system crap ->

FlapYoJacks posted:

Rust is excellent because of the borrow checker.

my problem isn't the borrow checker exactly, it's that the weird safety fetishists who've never peeked behind the scenes are even more irritating than your average pink floyd fan

i mean, the borrow checker is quite novel, but the "therefore it's the only way to write safe programs, and so any costs incurred are worth it" is the stretch i have trouble following

in practice, people opt for one of three options to appease the borrow checker

1. Vec<> in a trenchcoat. Passing around integer offsets, or wrapped integer offsets. This is how most people avoid the borrow checker's ire, until they need to do something like "delete an entry in a tree with only a &ref to the parent", and then they move to option two.

2. unsafe in a trenchcoat. aka how every stdlib type works.

3. Abstract: In this paper we... and you end up with an extra 5000 lines of code to avoid writing one unsafe block. Plus it only works in nightly.


the thing about the borrow checker, and rust as a whole, isn't really about writing "safe code", it's about guaranteeing safe uses of an api, which is close enough for most purposes.

tef
May 30, 2004

-> some l-system crap ->
hey left recursion nerds, what's your favourite way to handle left recursion in top down parsing

i have this parsing expression grammar, i want to extend it with a means of defining stuff like infix operators. like `e := e "+" e`, and i'm mulling over the various ways of trying to make it work

1. don't do it, and just force users to write right recursive grammars

2. rewrite the grammar internally to something without left recursion, but return the original parse tree

3. shove an operator precedence parser into the back of the peg engine, and let the user define operators for a given parse rule

4. extend the peg algorithm to handle left recursive grammars, either through bounded recursion or memoization

option 1 isn't great, options 2 and 4 are kinda the same in that they both lose the execution model of pegs, option 3 is basically a pratt parser, or something with precedence climbing. i'm really not sure what the best option is. it feels like "just handle left recursion" is a more invisible ux, but "here is a means to define operators" is a better way to define grammars.

i've been leaning towards implementing the memoization trick to just make left recursion work, but then it's easy to write something left associative over right associative by accident, and i'm not quite sure if making operator precedence implicit is the right choice either
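for flavour, option 3 is small enough to sketch as a precedence climber over pre-tokenised input (the operator table and tokens are made up; in real life the peg would supply the atoms):

code:

OPS = {"+": (1, "left"), "-": (1, "left"), "*": (2, "left"), "^": (3, "right")}

def parse_expr(tokens, min_prec=1):
    lhs = tokens.pop(0)                 # an atom; the peg would parse this
    while tokens and tokens[0] in OPS:
        prec, assoc = OPS[tokens[0]]
        if prec < min_prec:
            break
        op = tokens.pop(0)
        # a left-assoc operator binds its right side one level tighter
        rhs = parse_expr(tokens, prec + 1 if assoc == "left" else prec)
        lhs = (op, lhs, rhs)
    return lhs

print(parse_expr(list("1-2-3")))  # ('-', ('-', '1', '2'), '3'): left associative
print(parse_expr(list("1^2^3")))  # ('^', '1', ('^', '2', '3')): right associative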

tef
May 30, 2004

-> some l-system crap ->
yep, that's the one https://web.cs.ucla.edu/~todd/research/pepm08.pdf

the other one i'm aware of is the https://arxiv.org/pdf/1207.0443.pdf bounded left recursion one
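the seed-growing trick from the first paper also fits in a toy: plant a failing seed for the left-recursive call, then keep re-running the rule while the match gets longer. hardcoded grammar `expr <- expr "-" num / num`:

code:

import re

NUM = re.compile(r"\d+")

def parse_num(s, pos):
    m = NUM.match(s, pos)
    return (int(m.group()), m.end()) if m else None

def parse_expr(s, pos, memo):
    if pos in memo:
        return memo[pos]
    memo[pos] = None                   # the seed: left recursion fails at first
    while True:
        res = alternatives(s, pos, memo)
        if res is None or (memo[pos] and res[1] <= memo[pos][1]):
            return memo[pos]           # stopped growing, keep the best parse
        memo[pos] = res

def alternatives(s, pos, memo):
    left = parse_expr(s, pos, memo)    # the left-recursive branch
    if left and s[left[1]:left[1] + 1] == "-":
        num = parse_num(s, left[1] + 1)
        if num:
            return (("-", left[0], num[0]), num[1])
    return parse_num(s, pos)           # the base branch

print(parse_expr("1-2-3", 0, {})[0])   # ('-', ('-', 1, 2), 3): left associative

note it comes out left associative by default, which is the footgun mentioned above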

tef
May 30, 2004

-> some l-system crap ->

Nomnom Cookie posted:

the real answer is it sounds to me like you’ve boxed yourself in by choosing peg too early. first figure out what set of accepted grammars has the right ergonomics then make a judgment on how you’re going to handle them. imo

i mean, i could give up on having negative lookahead and ordered choice, yes, but i've decided that returning shift/reduce errors isn't an improvement

especially because i'm also parsing markup languages


tef
May 30, 2004

-> some l-system crap ->

Nomnom Cookie posted:

the real answer is it sounds to me like you’ve boxed yourself in by choosing peg too early. first figure out what set of accepted grammars has the right ergonomics then make a judgment on how you’re going to handle them. imo

that's kinda what i'm doing: i have pegs + left recursive features, and i'm trying to make this judgement

Nomnom Cookie posted:

does trying to snow people by throwing around jargon often work for you? did I say some words that sounded like “I think you should have gone with LALR” to you?

like, "you chose peg too early" ~> "you chose top down methods too early, if left recursion is a problem you want to solve"

"figure out the right set & handle them" ~> "left recursion generally means handling things bottom up in some form or another, which inevitably means building some sort of LR like automaton to handle the nondeterminism"

i did assume you weren't suggesting any of the methods i'd already outlined to bolt things onto the peg engine, like cancellation parsing or precedence climbing, and that you were suggesting i go back and find a "one size fits all" approach, but really, unless you were about to spring an earley parser on me, you were going to suggest some form of LR parsing

i mean, sure, maybe you're a big fan of demers' generalised left corner parser but i figured if that was the case you'd have been infodumping and not shitposting


Nomnom Cookie posted:

pop up a message box that says here’s a lookahead kid go get yourself a real grammar

but mostly this set the vibe, hth

  • Reply