|
what does rust do for memory management today? or do i not want to open that box?
|
# ? Oct 19, 2018 14:52 |
|
|
akadajet posted: what does rust do for memory management today? or do i not want to open that box?

isnt the point of the language that it's nothing
|
|
# ? Oct 19, 2018 14:54 |
|
why would you want GC in a language that does the right thing and drops unowned data what is this person doing with their life
|
# ? Oct 19, 2018 14:56 |
|
akadajet posted: what does rust do for memory management today? or do i not want to open that box?

it inserts allocation/deallocation calls at compile time as part of its "lifetime" concept, although if you need reference counting for whatever reason it's available
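a minimal sketch of both modes (everything here — `Noisy`, `rc_count_demo` — is made up for illustration): the compiler inserts the `Drop` call when a value's owner goes out of scope, and `Rc` is the opt-in refcounted escape hatch:

```rust
use std::rc::Rc;

struct Noisy(&'static str);

impl Drop for Noisy {
    // runs automatically when the value goes out of scope;
    // the compiler inserts this call at the scope exit, no GC involved
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn rc_count_demo() -> usize {
    let a = Rc::new(Noisy("shared"));
    let b = Rc::clone(&a); // refcount = 2, no deep copy
    let count = Rc::strong_count(&a);
    drop(b); // refcount back to 1; the value survives until `a` goes
    count
}

fn main() {
    let _owned = Noisy("stack-owned"); // freed at the end of main
    assert_eq!(rc_count_demo(), 2);
}
```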
|
# ? Oct 19, 2018 14:57 |
|
key point is that there was no tracing/sweeping gc (until withoutboats started doing "research")
|
# ? Oct 19, 2018 15:00 |
|
Don't knock GC until you've checked out the hoops that eg. the crossbeam lib has to jump through, with its epoch stuff, to get things any p-lang gives you for free. Then feel free to knock GC, but it's not like a GC lib means people will write radically different rust.
|
# ? Oct 19, 2018 15:26 |
|
That Rust garbage collector posted:What is the state of the project?
|
# ? Oct 19, 2018 15:30 |
|
what is my complicated language with a pissy compiler missing, i know, GC stalls
|
# ? Oct 19, 2018 16:59 |
|
yes i'm sure the goal of the author is to have gc through entire codebases. god forbid you'd want to make things like embedded scripting easier
|
# ? Oct 19, 2018 19:30 |
|
don't be loving stupid. how dare you even think of having more than one memory management paradigm in a language
|
# ? Oct 19, 2018 23:00 |
|
redleader posted: don't be loving stupid. how dare you even think of having more than one memory management paradigm in a language

SLUB-Nigurath
|
# ? Oct 19, 2018 23:26 |
|
redleader posted: don't be loving stupid. how dare you even think of having more than one memory management paradigm in a language

unironically this
|
# ? Oct 19, 2018 23:52 |
|
right so then why does rust have such strong restrictions on ownership and all the lifetime crap to guarantee use-after-free doesn't happen without needing GC, because mozilla can't do a browser in c++ without dangling pointers everywhere, and now rust is getting GC anyway. this is very lol
|
# ? Oct 20, 2018 00:32 |
|
Kevin Mitnick P.E. posted: because mozilla can't do a browser in c++ without dangling pointers everywhere

people like what they like
|
# ? Oct 20, 2018 00:36 |
|
Kevin Mitnick P.E. posted: and now rust is getting GC anyway

no one in their right mind is going to use this, and rust is still a cool language
|
# ? Oct 20, 2018 00:58 |
|
Kevin Mitnick P.E. posted: and now rust is getting GC anyway

I mean if you count a library, I guess, but you can write a dumbshit library in any language. I mean, some joker wrote a thing that lets you run javascript as a server
|
# ? Oct 20, 2018 01:15 |
|
yeah, there's nothing inconsistent about building a GC system in rust. the point of the language is to be a memory-safe systems language, not "GC bad, precise good". I'm not sure about this assertion being necessarily true, but a good GC system can probably outperform ref counting on some problems (and vice versa)
|
# ? Oct 20, 2018 01:32 |
|
quote:some joker wrote a thing that lets you run javascript as a server
|
# ? Oct 20, 2018 01:34 |
|
prisoner of waffles posted:I'm not sure about this assertion being necessarily true, but a good GC system can probably outperform ref counting on some problems (and vice versa)
|
# ? Oct 20, 2018 01:34 |
|
if you do refcounting and a vast number of objects are referred to from a single place, and that value is freed, then you may need to free the vast number of objects at once which could possibly stop the world and hold up the program. Some real-time or generational GCs could in some circumstances perform better and more predictably (latency-wise at least) than refcounting.
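a toy demo of that cascade (the counter stands in for real `free()` work, and all the names are invented): a single owner holds a huge number of objects, and dropping that one owner synchronously runs every destructor before the drop returns — that's the refcounting pause:

```rust
use std::cell::Cell;
use std::rc::Rc;

thread_local! {
    // counts destructor runs; stands in for the cost of freeing memory
    static DROPS: Cell<usize> = Cell::new(0);
}

struct Tracked;

impl Drop for Tracked {
    fn drop(&mut self) {
        DROPS.with(|d| d.set(d.get() + 1));
    }
}

fn cascade(n: usize) -> usize {
    DROPS.with(|d| d.set(0));
    // one owner referring to n objects from a single place
    let owner = Rc::new((0..n).map(|_| Tracked).collect::<Vec<_>>());
    // dropping the last Rc frees all n objects right here, synchronously,
    // before this line finishes — the whole cascade happens at once
    drop(owner);
    DROPS.with(|d| d.get())
}

fn main() {
    assert_eq!(cascade(100_000), 100_000);
}
```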
|
# ? Oct 20, 2018 02:20 |
|
gc better than refcounting, got it
|
# ? Oct 20, 2018 05:07 |
|
Helicity posted: it inserts allocation/deallocation calls at compile time as part of its "lifetime" concept

this is kind of a weird way to say "some data structures in the standard library call malloc or w/e when it makes sense to do so, and so can you." there's no compiler magic involved.
|
# ? Oct 20, 2018 05:17 |
|
rust's drop checking rules can get really hairy when types holding lifetimes are involved, though. also it's the source of the clunkiest syntax ever

quote: In the meantime, there is an unstable attribute that one can use to assert (unsafely) that a generic type's destructor is guaranteed to not access any expired data, even if its type gives it the capability to do so.

tinaun fucked around with this message at 05:28 on Oct 20, 2018 |
# ? Oct 20, 2018 05:23 |
|
MononcQc posted: if you do refcounting and a vast number of objects are referred to from a single place, and that value is freed, then you may need to free the vast number of objects at once which could possibly stop the world and hold up the program. Some real-time or generational GCs could in some circumstances perform better and more predictably (latency-wise at least) than refcounting.

i'm sure this is a really dumb noob question but, why do GC and GC-esque systems not defer/stagger the freeing of a shitload of objects versus making the decision "oh we have a shitload of objects to free, better fuckin halt everything until we nuke all of them right this moment"
|
# ? Oct 20, 2018 05:23 |
|
yeah, I was going to ask about that earlier — is there any precedent for “freeing” an object to throw it on a queue and another thread taking care of running the destructor? i.e. I guess GC finalizers but in a non-GC system
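one sketch of that queue idea (names invented; this is just the shape of the pattern, not anyone's library): ship the dying value to a reaper thread over a channel, so "freeing" on the hot path is just a send and the destructor actually runs on the other thread:

```rust
use std::sync::mpsc;
use std::thread;

// any Send value can be shipped to a reaper thread; its destructor
// then runs off the hot path instead of in the caller
fn spawn_reaper<T: Send + 'static>() -> mpsc::Sender<T> {
    let (tx, rx) = mpsc::channel::<T>();
    thread::spawn(move || {
        for _item in rx {
            // `_item` is dropped at the end of each loop iteration,
            // running T's destructor on this thread
        }
        // channel closed: all senders gone, reaper exits
    });
    tx
}

fn main() {
    let tx = spawn_reaper::<Vec<u8>>();
    let big = vec![0u8; 1 << 20];
    tx.send(big).unwrap(); // "free" is now just a channel send
    drop(tx);
}
```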
|
# ? Oct 20, 2018 05:29 |
|
Lutha Mahtin posted: i'm sure this is a really dumb noob question but, why do GC and GC-esque systems not defer/stagger the freeing of a shitload of objects versus making the decision "oh we have a shitload of objects to free, better fuckin halt everything until we nuke all of them right this moment"

well some of them kinda do, or at least try to do the heavy lifting concurrently with app threads. there's always a risk of app threads allocating faster than your GC can keep up, and then it's STW time though. except for azul's magic GC, which I suspect mainly works by putting your throughput in the toilet, enlisting app threads to do GC work interleaved with real work. it does however guarantee no STW ever
|
# ? Oct 20, 2018 06:00 |
|
oh and fully STW GC is best for overall throughput so there’s no single best GC algorithm. more like a menu of operational headaches to choose from
|
# ? Oct 20, 2018 06:02 |
|
Ralith posted: this is kind of a weird way to say "some data structures in the standard library call malloc or w/e when it makes sense to do so, and so can you." there's no compiler magic involved.

i mean, there's a ton of compiler magic involved, but it's generalized to cover arbitrary value consumption/destruction
|
# ? Oct 20, 2018 06:17 |
|
Lutha Mahtin posted: i'm sure this is a really dumb noob question but, why do GC and GC-esque systems not defer/stagger the freeing of a shitload of objects versus making the decision "oh we have a shitload of objects to free, better fuckin halt everything until we nuke all of them right this moment"

usually when people say 'garbage collector' they mean a tracing collector. with these you usually scan the stack and then recursively follow pointers to build a graph of all reachable objects on the heap. then you throw out everything else. the expensive thing here is building the graph. once you have that, you really want to evict anything on the heap that isn't reachable, to save yourself having to build the graph again for as long as possible.

you can improve on this with a generational collector tho. with these you take advantage of the fact that most objects are short lived and are only referenced from the stack and rarely from older objects. collecting just recently allocated objects nets you most of the benefits of collecting everything, but is much faster. you allocate them in one segment of the heap reserved for new objects and then move survivors of garbage collection passes to a different segment. the objects in the "survivor" (or "old") generation are only allowed to contain pointers to other objects in the "survivor" generation, so your runtime needs to intercept all pointer assignments and move anything referenced from the "young" to the "survivor" segment. but this allows you to basically ignore the "survivor" generation during gc if you want — you only collect the "survivor" generation when you really need to.

you can have multiple levels of this, with objects getting promoted to segments that are collected much more rarely as they age. this is how pretty much every garbage collector (note: assuming you don't count reference counting as gc) works in practice, outside of exotic ones designed for very specific use cases or ones for languages like haskell where you can make assumptions about pointers that aren't possible in more mainstream languages

the talent deficit fucked around with this message at 06:36 on Oct 20, 2018 |
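the "build the graph, throw out everything else" step above, as a toy mark pass (no real collector works over `Vec<usize>` edge lists; this is just the shape of it): start from the roots, follow edges, and anything left unmarked is garbage:

```rust
// toy mark phase of a tracing collector over an index-based heap:
// node i's outgoing pointers are heap[i]; anything not reachable
// from the roots is garbage
fn mark(heap: &[Vec<usize>], roots: &[usize]) -> Vec<bool> {
    let mut live = vec![false; heap.len()];
    let mut stack: Vec<usize> = roots.to_vec();
    while let Some(i) = stack.pop() {
        if !live[i] {
            live[i] = true;
            stack.extend(&heap[i]); // recursively follow pointers
        }
    }
    live
}

fn main() {
    // 0 -> 1 -> 2; node 3 points at 0 but nothing points at 3,
    // so 3 is unreachable even though it holds a live pointer
    let heap = vec![vec![1], vec![2], vec![], vec![0]];
    let live = mark(&heap, &[0]);
    assert_eq!(live, vec![true, true, true, false]);
}
```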
# ? Oct 20, 2018 06:33 |
|
the tl;dr version: there are several tradeoffs among throughput, pause latency, and memory footprint. there is no free lunch in garbage collection, ever. every gc strategy falls down somewhere. (there is also not a free lunch in avoiding garbage collection: malloc() and free() have extraordinarily unpleasant behaviors in corner cases. the C++ whackadoos who rave about "RAII" are just moving the problem into hairy profiling issues)
|
# ? Oct 20, 2018 06:41 |
|
pseudorandom name posted: yeah, I was going to ask about that earlier — is there any precedent for "freeing" an object to throw it on a queue and another thread taking care of running the destructor? i.e. I guess GC finalizers but in a non-GC system

while that stuff will usually just be done on a gc thread along with other things, it will be outside the "stop the world" phase of the gc, so it is usually not a noticeable problem in any way.

just getting the things done in one go is good for throughput as well, avoiding repeatedly having to trash the application's cache state by walking over random dead objects (same issue as with refcounting, which will often in practice turn out extremely expensive, since it inserts a lot of pokes at memory which is not in cache and not otherwise useful to have in cache). it may seem like there is no gain, in that the gc also won't have the dead objects in cache when it finishes the scan to determine they are dead, but at that point the cache contains mostly junk anyway (i.e. the final bits of the tracing) so it is no loss evicting it.

if the thinking is to defer cleaning up objects for a long time to get a huge amount of work done: why not defer the entire gc if you have that kind of memory to spare?
|
# ? Oct 20, 2018 09:33 |
|
Cybernetic Vermin posted: if the thinking is to defer cleaning up objects for a long time to get a huge amount of work done: why not defer the entire gc if you have that kind of memory to spare?

Also known as the strategy for JVM benchmarks for a while.
|
# ? Oct 20, 2018 10:07 |
|
it is clearly the best bet for throughput, so not a bad strategy. as mitnick noted above there are a lot of valid tradeoffs
|
# ? Oct 20, 2018 10:26 |
|
Xarn posted:Also known as the strategy for JVM benchmarks for a while. they literally just introduced a new “null gc” option for this
|
# ? Oct 20, 2018 10:35 |
|
Notorious b.s.d. posted: the tl;dr version: there are several tradeoffs among:

which malloc/free corner cases are you referring to? and while RAII isn't some magic bullet, it is a good concept. It allows for some great stuff like unique_ptr. There are many things wrong with c++, but RAII isn't one of em. I'm not sure what hairy profiling issues you are thinking of, but in my experience it makes it more obvious when you are allocating/deallocating memory. Perhaps someone doing stupid poo poo in destructors is causing you problems?

If you need extremely low latency and high throughput (the niche that c++ lives in), I'd say there is a very small chance you want to use a gc. It simply doesn't have the determinism you need to not have a pause at just the wrong time and kill your p100.
|
# ? Oct 20, 2018 12:28 |
|
then there’s the thing nginx does that I think is really cool: per-request arenas. bump pointer allocation and almost free deallocation.

if I made a lang it would be event handling only, and the two object lifetimes you get are forever and until the end of the handler
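a toy sketch of the arena idea (nginx's actual pools are C, `ngx_pool_t`; this index-based rust version is invented and just shows the shape): every allocation during a request is a bump/push into one arena, and the whole request's memory is reclaimed in one shot when the arena is dropped:

```rust
// toy per-request arena: allocation is a push (bump of the length),
// and everything the request allocated is freed together at the end
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }
    // "allocation" hands back an index instead of a pointer
    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1
    }
    fn get(&self, id: usize) -> &T {
        &self.items[id]
    }
}

fn handle_request() -> String {
    let mut arena = Arena::new();
    let a = arena.alloc(String::from("hello"));
    let b = arena.alloc(String::from("world"));
    format!("{} {}", arena.get(a), arena.get(b))
    // arena dropped here: the whole request's allocations go at once
}

fn main() {
    assert_eq!(handle_request(), "hello world");
}
```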
|
# ? Oct 20, 2018 16:11 |
|
fwiw that’s a trick for some Erlang code. Allocate enough memory for a short-lived process doing some work to never need a GC, and as the process dies, it gets reaped for free. The risk of doing that with longer-lived processes and non-isolated memory is that you can really accelerate memory fragmentation.
|
# ? Oct 20, 2018 16:19 |
oh interesting, tim's virus also granted him free unlimited sentinels to have sex with
|
|
# ? Oct 20, 2018 16:20 |
|
jeffery posted: oh interesting, tim's virus also granted him free unlimited sentinels to have sex with

you need to take your brain meds, friend
|
# ? Oct 20, 2018 17:27 |
|
|
|
Soricidus posted: they literally just introduced a new “null gc” option for this

that's been an option on the IBM JVM for many years. sadly i had good reason to use it at a past job
|
# ? Oct 20, 2018 17:34 |