|
Are there any standard libraries with a mutex type or synchronization type having an interface like this: code:
You can also call ReleaseLock without calling WaitForLock if you changed your mind in the meantime. Access to GetInLine might have to be synchronized by the user, or maybe it can be called simultaneously (if you don't care about acquirers' relative order). This lets you release the resources of one mutex before you've blocked acquiring the resources of the "next" mutex, so you don't block other threads (if you're acquiring/releasing a sequence of mutexes).
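A sketch of what such an interface might look like, assuming a simple ticket queue (the method names are taken from the post; everything else is my guess at an implementation, not any standard library's):

```python
import threading

class FairLock:
    """Hypothetical sketch of the described interface: get_in_line reserves
    a place in a FIFO queue without blocking, wait_for_lock blocks until
    that place reaches the front, and release_lock gives the place up
    whether or not wait_for_lock was ever called."""

    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0  # next ticket number to hand out
        self._queue = []       # outstanding tickets; front = current holder

    def get_in_line(self):
        # Non-blocking; internally synchronized, so callers may race
        # (their relative order is then whatever the race decides).
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            self._queue.append(ticket)
            return ticket

    def wait_for_lock(self, ticket):
        # Block until our ticket reaches the front of the queue.
        with self._cond:
            while self._queue[0] != ticket:
                self._cond.wait()

    def release_lock(self, ticket):
        # Legal even if we never waited: drop out of line and wake waiters.
        with self._cond:
            self._queue.remove(ticket)
            self._cond.notify_all()
```

The "changed your mind" path is just `get_in_line` followed by `release_lock` with no wait in between.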
|
# ? Jul 25, 2016 18:21 |
|
|
rjmccall posted:i'm apparently conflating a couple different things

rjmccall posted:you can only get mutable access to the value if the reference count is 1 (which you can dynamically query)

rjmccall posted:the mutex type builds in knowledge of the thing it protects. accessing that memory is not implicit, you call a lock method which blocks and gives you back something that you can modify. it does support a try_lock

sarehu posted:Are there any standard libraries with a mutex type or synchronization type having an interface like this:
|
# ? Jul 25, 2016 18:34 |
|
JawnV6 posted:it seems harder to reason about getinline/releaselock being a valid pattern than try_lock or just a total ordering over mutexed resources, what is it you want/expect the application thread to be doing while it's in line but not yet working?

Releasing the previous mutex it holds, calling GetInLine for a multitude of mutices, and any general computation that doesn't require reading/writing the previous mutexed resource, maybe because it was copied out. An example would be multiple threads reading and modifying an in-memory tree, where they all start from the top, with a mutex on each node. If you have to wait for a child node before releasing the node you hold, then D threads trying to access the same key, where D is the depth of the tree, could lock up the entire tree.
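The hand-over-hand pattern being criticized might look like this sketch (a toy binary search tree; the node layout and names are assumptions for illustration):

```python
import threading

class Node:
    def __init__(self, key, value, left=None, right=None):
        self.lock = threading.Lock()
        self.key, self.value = key, value
        self.left, self.right = left, right

def search(root, key):
    # Hand-over-hand (lock coupling): acquire the child's lock while still
    # holding the parent's, then release the parent. If the child is
    # contended, we block while *still holding* the parent -- which is how
    # D threads chasing one hot key can wedge a whole root-to-leaf path.
    node = root
    node.lock.acquire()
    while node is not None:
        if key == node.key:
            value = node.value
            node.lock.release()
            return value
        child = node.left if key < node.key else node.right
        if child is None:
            node.lock.release()
            return None
        child.lock.acquire()  # may block with the parent lock still held
        node.lock.release()
        node = child
```

The get-in-line API would let the descent reserve the child's place in line first and release the parent before blocking.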
|
# ? Jul 25, 2016 19:26 |
|
now I think you only brought this up to pluralize mutexes the "right" way

taking ordering from a nonblocking call has an odd smell, and you're probably asking application code to do a total ordering anyway. even with that: what if I have a round robin scheduler, resources ABC, t1 gets in line for AB and is switched out, t2 gets in line for ABC, t1 comes back and gets in line for C?
|
# ? Jul 25, 2016 21:22 |
|
A partial ordering. With your example they would have to wait for the mutexes in the same order.

Another advantage of this kind of mutex acquisition API, or one where you don't care about order, is that you can (with more work on the API) select over a set of mutexes that you're in line for, along with other stuff like an interrupt signal, and do work with whatever was acquired first (or stop working, if you got interrupted).

It would only make sense to get in line on three mutexes if you either hold a preceding mutex and want to acquire the three before the next acquirer of the preceding mutex (thus get-in-line operations can't get intermingled like that), or you don't care and just want to process the one that's available as soon as possible via some select-like functionality. (In which case you wouldn't want the mutex to be one where the order you "get in line" preserves ordering of acquisition, since it would hurt performance.)

The general problem here is that most stdlib concurrency libraries require you to build your own tools on top of primitives, and thus you don't really have a common library of concurrency utilities that you can compose together.
|
# ? Jul 25, 2016 22:14 |
|
when you've got a complicated concurrent architecture that can't be solved with the basic concurrency primitives and can't be clearly reasoned about (without lots of vague parentheticals), the problem ain't in the lack of equally complicated concurrency APIs
|
# ? Jul 25, 2016 22:35 |
|
muts ex
|
# ? Jul 25, 2016 23:25 |
|
idgaf i just PEEK the poo poo outta memory
|
# ? Jul 25, 2016 23:29 |
|
ynohtna posted:when you've got a complicated concurrent architecture that can't be solved with the basic concurrency primitives and can't be clearly reasoned about (without lots of vague parentheticals), the problem ain't in the lack of equally complicated concurrency APIs

It isn't a complicated architecture at all: you only need two mutexes before you want this sort of thing. For example, suppose you've got some shared object that you want to read and mutate -- and when you mutate it, you have to log that you did so. Then, naively, you get something like this: code:
So instead with a better API you can do this: code:
Note that you can chain your regions of exclusive access this way while maintaining ordering, even if the lock is a read/write lock, or some other sort of mutex from a completely unrelated concurrency library. Which is only part of what I mean when I say it's more composable.
|
# ? Jul 26, 2016 00:21 |
|
i just use nlog op
|
# ? Jul 26, 2016 00:32 |
|
how deep is your get in line queue
|
# ? Jul 26, 2016 01:52 |
|
FamDav posted:how deep is your get in line queue cause i really need to learn
|
# ? Jul 26, 2016 01:54 |
|
Deeper than yours.
|
# ? Jul 26, 2016 01:56 |
|
sarehu posted:Are there any standard libraries with a mutex type or synchronization type having an interface like this:

semi-related, not really, just reminded me of it: Java has a stamped lock so you can occ your locking https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html

occ is the happiest form of cc
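For reference, a toy seqlock-style analogue of the optimistic-read pattern StampedLock offers (not the Java API, just the shape of it; the class is invented for illustration):

```python
import threading

class StampedBox:
    """Toy analogue of StampedLock's optimistic read on a single value:
    readers take a version stamp, read without locking, then validate that
    no write intervened; on failure they fall back to the lock."""
    def __init__(self, value):
        self._lock = threading.Lock()
        self._version = 0      # odd while a write is in progress
        self._value = value

    def write(self, value):
        with self._lock:
            self._version += 1   # odd: writer active
            self._value = value
            self._version += 1   # even: quiescent again

    def try_optimistic_read(self):
        v = self._version
        if v % 2:                # a writer is mid-flight
            return None, None
        return v, self._value

    def validate(self, stamp):
        return stamp is not None and self._version == stamp

    def read(self):
        stamp, value = self.try_optimistic_read()
        if self.validate(stamp):
            return value         # optimistic fast path, no lock taken
        with self._lock:         # pessimistic fallback
            return self._value
```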
|
# ? Jul 26, 2016 02:01 |
|
Sweeper posted:semi-related not really just reminded me of it

StampedLock posted:Stamp values may recycle after (no sooner than) one year of continuous operation. A stamp held without use or validation for longer than this period may fail to validate correctly.

I'm the guy who makes a birthday cake for his StampedLocks.
|
# ? Jul 26, 2016 15:50 |
|
ok, so I have a second resource that I need to ensure is accessed in the same ordering as my primary resource, ideally without continuing to hold the lock on the first?

you could just acquire a second lock under the first and release #1 before the actual write, only blocking a reader when a writer is waiting to grab the second lock. or you could get the same behavior if the logger is a separate thread with a serializing queue in front of it.
|
# ? Jul 26, 2016 17:53 |
|
JawnV6 posted:you could just acquire a second lock under the first and release #1 before the actual write, only blocking a reader when a writer is waiting to grab the second lock.

Thus that doesn't really solve anything.

JawnV6 posted:or you could get the same behavior if the logger is a separate thread with a serializing queue in front of it.

And that doesn't compose well, unless you're happy replacing every mutex with a thread.
|
# ? Jul 26, 2016 18:24 |
|
back with part 3: compiler research has been stalled since 1975

im just a simple C programmer who empathizes with the hardware, so i genuinely do not understand the academic fascination with types

sarehu posted:Thus that doesn't really solve anything.

sarehu posted:And that doesn't compose well, unless you're happy replacing every mutex with a thread.
|
# ? Jul 30, 2016 20:48 |
JawnV6 posted:im just a simple C programmer who empathizes with the hardware, so i genuinely do not understand the academic fascination with types A good type system can catch a *ton* of errors for you. That's why people like strong type systems.
|
|
# ? Jul 30, 2016 20:56 |
|
JawnV6 posted:im just a simple C programmer who empathizes with the hardware, so i genuinely do not understand the academic fascination with types expressing your business logic in terms the hardware understands is kind of a drag.
|
# ? Jul 30, 2016 20:57 |
|
also, in terms of pure academic interests, type systems are a really good stepping stone to real program correctness proofs (ala Coq)
|
# ? Jul 30, 2016 20:59 |
|
jony neuemonic posted:expressing your business logic in terms the hardware understands is kind of a drag.

Asymmetrikon posted:also, in terms of pure academic interests, type systems are a really good stepping stone to real program correctness proofs (ala Coq)

being on big beefy processor hardware, chucking away determinism, and floating on an abstract pile of unknowns would be a drag for me
|
# ? Jul 30, 2016 21:06 |
|
JawnV6 posted:being on big beefy processor hardware, chucking away determinism, and floating on an abstract pile of unknowns would be a drag for me that's legit, i can see why that'd be weird if you're used to being close to the hardware.
|
# ? Jul 30, 2016 22:43 |
|
tbh I have no idea what you guys write code for (and get paid for it) because I spend most of my time making third party software play nice with each other. erlang seems the only thing talked about here that would be remotely relevant to what I do, but I couldn't use it even if I wanted to because the environment I work in (mobile OSes) restricts you to a single process. so most of the language features discussed here fall kinda flat with me.

never have race conditions again thanks to continuations? but you can still have race conditions with reentrancy. write stateless code so you could not possibly have race conditions? but the whole purpose of my code is to change state, it literally doesn't do anything else (there's a couple parsers I guess?). and how do I write monadic code when the thread of execution gets lost in a dead end of framework code that never notifies me of completion (ask me about uikit animations!!).

swift looks nice but is it ever reaching a stable form? kinda tired of code samples that stop parsing as valid code every few months, also why I'm not touching rust
|
# ? Jul 30, 2016 23:10 |
|
quote:Here, different languages are embedded via the <lang_name>{} notation, and each sub-language can have values from the base language. Notably, I use five different sub-languages here: this looks vile and naive
|
# ? Jul 30, 2016 23:21 |
|
quote:Rust is the first major language to promote composition over inheritance with traits. aaaaaaaaaaaaaa
|
# ? Jul 31, 2016 00:10 |
|
rjmccall posted:aaaaaaaaaaaaaa lmao what
|
# ? Jul 31, 2016 00:25 |
|
triple sulk posted:lmao what unless you want to argue major, that statement is very wrong
|
# ? Jul 31, 2016 05:35 |
|
FamDav posted:unless you want to argue major, that statement is very wrong i'm not disagreeing
|
# ? Jul 31, 2016 05:39 |
|
lol. rust traits aren't even "composition" in the sense of containment, they're more akin to typeclasses or interfaces
|
# ? Jul 31, 2016 08:09 |
|
I accidentally read a different PL-curious person's "senior thesis" the other day and... good lord. Advice for undergrads: do not talk about your senior thesis. I'm glad being a math major saved me from this life mistake.
|
# ? Jul 31, 2016 08:21 |
|
sarehu posted:A partial ordering. With your example they would have to wait for the mutexes in the same order.

quote:This lets you release the resources of one mutex before you've blocked acquiring the resources of the "next" mutex, so you don't block other threads (if you're acquiring/releasing a sequence of mutexes).

the advantage of your proposal is that you get partial ordering, but this also means you have to write your code to handle doing things out of order in order to take advantage of it. and then i guess you can turn it off by changing the magic ordering? i guess the disadvantage of your proposal is that it doesn't make it easier to compose atomic operations into one larger atomic section.

sarehu posted:Are there any standard libraries with a mutex type or synchronization type having an interface like this:

this looks a lot like read copy update, with the "reads are safe -> writer waiting -> reads unsafe -> reads safe" lifecycle/flags/epochs, but with a magic linearizability thing implied by getting in line. although this api could be made to work for mutexes on tree-like structures that admit out-of-order and partial updates, those are often hard to implement and the concurrency is intertwined with them for performance. speaking of which

sarehu posted:An example would be multiple threads reading and modifying an in-memory tree,

aside: you might want to look at the bw-tree/llama storage system. they make a fast b-tree with a miniature transaction system for structural (multi-node) operations, and instead of direct pointers between nodes they use an inode table whose entries they compare-and-swap, inserting delta records. i'm also very unsure on how you're handling gc

quote:they all start from the top, with a mutex on each node. If you have to wait for a child node before releasing the node you hold, then D threads trying to access the same key, where D is the depth of the tree, could lock up the entire tree.

and that's why pessimistic locking requires deadlock detection. it's worth mentioning that you don't always want to dispose of locks early unless you enjoy lost writes: another writer that saw an earlier version before you locked it could still be running after your writes have finished, so you'll want to keep it around for a while.

quote:The general problem here is that most stdlib concurrency libraries require you to build your own tools on top of primitives and thus you don't really have a common library of concurrency utilities that you can compose together.

unless you have a very big data structure or a lot of writers, one big lock works very well. i guess this is only really a problem if you're writing a database and not using one.
|
# ? Aug 1, 2016 03:57 |
|
rjmccall posted:aaaaaaaaaaaaaa you have to admire the delicate nature of that claim "first major language to promote composition" because i am sure every language that had traits before won't be "major" and the major ones that did didn't "promote" them
|
# ? Aug 1, 2016 04:07 |
|
JawnV6 posted:being on big beefy processor hardware, chucking away determinism, and floating on an abstract pile of unknowns would be a drag for me

On the flip side, Centaur (VIA) uses ACL2 to verify (parts of) their x86 cores. Anna Slobodova, Sol Swords, Jared Davis, and team really pushed that forward. Davis just moved to Apple's new formal verification team. They've written a number of papers and given a number of presentations on the benefits they got out of it. Earlier work at AMD with ACL2 (formally verifying floating point units) has been around for a while too.

minidracula fucked around with this message at 04:33 on Aug 1, 2016 |
# ? Aug 1, 2016 04:15 |
|
hi i want to build a just in time compiler for my dynamic language but i don't want to use any runtime information and infer it from annotations
|
# ? Aug 1, 2016 04:22 |
|
minidracula posted:On the flip side, Centaur (VIA) uses ACL2 to verify (parts of) their x86 cores. Anna Slobodova, Sol Swords, Jared Davis, and team really pushed that forward. Davis just moved to Apple's new formal verification team. They've written a number of papers and given a number of presentations on the benefits they got out of it. Earlier work at AMD with ACL2 (formally verifying floating point units) has been around for a while too.

VikingofRock posted:A good type system can catch a *ton* of errors for you. That's why people like strong type systems.

quote:If you compile to Python, well, your program will probably run pretty slowly and you lose any static typing guarantees.

there's no reason this mythical compile-to-python scheme couldn't 'tuple up ORIG_TYPE and carry that around for checks. it's kinda unimaginative as an insult and falls into the type-mysticism blind spot
|
# ? Aug 1, 2016 17:04 |
|
i don't think that's type-mysticism, i think that guy is just a moron?
|
# ? Aug 1, 2016 17:10 |
|
that statement is indeed wrong and suffers from types-as-mysticism foolishness, but not in the way you're saying.

compiling to python doesn't lose static type safety, because the type safety was already checked by the compiler. compiling to assembly doesn't lose type safety either.

if you're not checking types, but are just rewriting source structures to similar structures in the target language, what you have is not a "compiler" because you aren't really implementing the source language. typically you can't even do any kind of faithful translation on a statically-typed language without a certain level of type-checking; even mlton, which famously "didn't have a type-checker", actually did have one, it just didn't check everything and it produced abysmal diagnostics
|
# ? Aug 1, 2016 17:18 |
|
JawnV6 posted:there's no reason this mythical compile-to-python scheme couldn't 'tuple up ORIG_TYPE and carry that around for checks. it's kinda unimaginative as an insult and falls into the type-mysticism blind spot

The reason why you shouldn't compile to Python is that it sucks. Take it from me, I have written a compiler that targets Python. Getting reasonable numeric types out of Python also sinks your performance. It's not related to dynamic types, it's related to the fact that Python plays fast and loose with its numeric types (like most dynamic languages, but I think that's a design decision, and you could easily design a dynamically typed language with some goddamn numeric discipline).
|
# ? Aug 1, 2016 18:04 |
|
|
Athas posted:(like most dynamic languages, but I think that's a design decision, and you could easily design a dynamically typed language with some goddamn numeric discipline). iirc, common lisp is dynamic as heck but has really sane numeric types. pretty hazy on the details though.
|
# ? Aug 1, 2016 19:27 |