|
gonadic io posted:I own a [T; 8] and would like to access the Ts out of it without copying (this is important). Then I'd like to call a FnOnce(T) -> T on each one and package them back up in an array. I'm very confused by these requirements. Can you elaborate? For small data types, passing a pointer is likely to be no faster (and possibly slower) than copying the value. For large data types, the compiler will probably optimize the copy into a pointer chase anyway. Also, is it a different array you're packaging the results in, or the same one? Because if it's a different one you at LEAST need to require Clone; otherwise you're just re-implementing Clone with unsafe. Basically, I don't understand why your requirements are what they are, and why you can't use an FnMut. I know there are some issues with being unable to move out of mutable references, so code:
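Since this exchange, the standard library grew `[T; N]::map` (stable since Rust 1.55), which is exactly the by-value, no-Clone mapping being asked for: it consumes the array and moves each element through an `FnMut`. A minimal sketch with an invented `NoClone` type:

```rust
// `array::map` takes the array by value and an FnMut, so each element is
// moved into the closure, never copied or cloned.
#[derive(Debug, PartialEq)]
struct NoClone(u32); // deliberately neither Copy nor Clone

fn bump_all(xs: [NoClone; 8]) -> [NoClone; 8] {
    xs.map(|NoClone(n)| NoClone(n + 1))
}

fn main() {
    let xs = [NoClone(0), NoClone(1), NoClone(2), NoClone(3),
              NoClone(4), NoClone(5), NoClone(6), NoClone(7)];
    let ys = bump_all(xs);
    assert_eq!(ys[7], NoClone(8));
}
```

None of this existed in 2016, which is why the thread ends up hand-rolling it.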
Linear Zoetrope fucked around with this message at 23:48 on Apr 20, 2016 |
# ? Apr 20, 2016 23:46 |
|
|
# ? May 15, 2024 03:17 |
|
It might well entirely be that I'm trying to do the wrong thing here too. The core problem is that I have a large tree of nested box pointers to SVO<Registered>, and I'd like to end with SVO<Unregistered>. Given that this is a potentially huge structure, I'd like to do it with as little copying as possible, so I've got deregister taking self by value, as I'll never want to access the old SVO. The leaf case is easy, but given the array [Box<SVO<Registered>>; 8] I'd like to call deregister on each of them recursively (to get 8 Box<SVO<Unregistered>>s) and then collect them up. Given that it's only 8 pointers I'm not that fussed about reusing the old array's memory (but I suppose if I can, I will). What I was asking in my post was how to avoid just manually indexing 8 times, but if my approach is wrong I'd like to know how I could do it better! However, changing new_octants to require a FnMut rather than a Fn seems to work perfectly. This is what I've got now, does it make sense? Does having the index be u8 actually help at all? The index can only be 0-7, so I kind of would like a u3 type so that invalid values can't be represented, but then I always end up converting to usize to do any actual indexing. code:
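A much-simplified sketch of the recursive shape being described, with stand-in names (`Svo`, `map_leaves` — not the poster's actual types or API); it leans on the modern `array::map`, which didn't exist when the post was written:

```rust
// Toy octree: leaves carry a payload, nodes carry 8 boxed children.
enum Svo<T> {
    Leaf(T),
    Node([Box<Svo<T>>; 8]),
}

impl<T> Svo<T> {
    // Takes self by value, like the deregister in the post: payloads are
    // moved through `f`, nothing is cloned.
    fn map_leaves<U, F: FnMut(T) -> U>(self, f: &mut F) -> Svo<U> {
        match self {
            Svo::Leaf(v) => Svo::Leaf(f(v)),
            Svo::Node(octants) => {
                // array::map consumes the [Box<_>; 8] by value; `*o` moves
                // the child out of its Box, and `&mut *f` reborrows f for
                // the recursive call.
                Svo::Node(octants.map(|o| Box::new((*o).map_leaves(&mut *f))))
            }
        }
    }
}

fn leaf_count<T>(svo: &Svo<T>) -> usize {
    match svo {
        Svo::Leaf(_) => 1,
        Svo::Node(octants) => octants.iter().map(|o| leaf_count(o)).sum(),
    }
}

fn main() {
    let kids: [Box<Svo<i32>>; 8] =
        [0, 1, 2, 3, 4, 5, 6, 7].map(|n| Box::new(Svo::Leaf(n)));
    let doubled = Svo::Node(kids).map_leaves(&mut |n| n * 2);
    assert_eq!(leaf_count(&doubled), 8);
    if let Svo::Node(kids) = &doubled {
        if let Svo::Leaf(v) = &*kids[3] {
            assert_eq!(*v, 6);
        }
    }
}
```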
gonadic io fucked around with this message at 10:03 on Apr 21, 2016 |
# ? Apr 21, 2016 10:00 |
|
I have an awful suggestion: use mem::transmute to avoid copying altogether. There are a lot of potential problems (it could cause bugs to appear if the structure of SVO changes later), but I don't think there is a faster option. code:
|
# ? Apr 21, 2016 18:16 |
|
What about the best of both worlds?code:
|
# ? Apr 21, 2016 20:56 |
|
So I'm porting some stuff from Go to Rust to see what it's like, and I'm trying to figure out what the Rust version of this code is. My Go code looks like this:code:
I'm trying to figure out how to do this sort of thing in Rust. So far it's been a billion errors about stuff not implementing Sized - apparently that's because traits by themselves aren't assumed to be a pointer to a struct that implements them as in Go, so I have to Box<> them. After I figured that out, it doesn't look like you can make global variables even if you wrap them in a Mutex and I need to use the lazy_static crate, but that doesn't work because Decoder doesn't implement Send so I needed to make my Boxes Box<Decoder+Send>. I'm trying to figure out why this is such a mess and what I'm doing that's so very wrong. What I'm actually trying to do is make a bunch of struct types that implement a trait called Decoder (e.g. XMLDecoder, JSONDecoder) and then dump the output from a TOML parser into a method to configure each one and return a configured struct instance, then register them by a string name for later reference so I can pass a slice of bytes into the decode method from a bunch of different "reader" type things (e.g. I query a bunch of APIs and pass the result through a different decoder depending on some configuration on the reader side). The decode method never mutates its parent object, it just reads configuration from it, so it should be thread-safe to call decode from a whole bunch of threads at once against the same decoder. The end result is it spits out a JSON-esque enum object (the Message). I was initially attracted to Rust because of the Enum concept and not having to use interface{} for everything and playing with reflection to see what types are inside the interface{}, but it definitely has a learning curve. So far what I've got is this: code:
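A hedged sketch of the registry idea being described, using invented stand-ins (`Decoder`, `Message`, `JsonDecoder`, `Registry` are illustrative names, not the poster's actual API). The `Send + Sync` supertraits are what the thread's `Box<Decoder+Send>` fight was about:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Message {
    Text(String),
}

// Send + Sync bounds let the boxed trait objects cross thread boundaries,
// which is why lazy_static complained without them.
trait Decoder: Send + Sync {
    fn decode(&self, input: &[u8]) -> Message;
}

struct JsonDecoder;
impl Decoder for JsonDecoder {
    fn decode(&self, input: &[u8]) -> Message {
        Message::Text(String::from_utf8_lossy(input).into_owned())
    }
}

struct Registry {
    // Trait objects are unsized, so they live behind a Box in the map.
    decoders: HashMap<String, Box<dyn Decoder>>,
}

impl Registry {
    fn new() -> Self {
        Registry { decoders: HashMap::new() }
    }
    fn register(&mut self, name: &str, d: Box<dyn Decoder>) {
        self.decoders.insert(name.to_owned(), d);
    }
    fn get(&self, name: &str) -> Option<&dyn Decoder> {
        // &**b peels &Box<dyn Decoder> down to &dyn Decoder.
        self.decoders.get(name).map(|b| &**b)
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register("json", Box::new(JsonDecoder));
    let msg = reg.get("json").unwrap().decode(b"hi");
    assert_eq!(msg, Message::Text("hi".to_owned()));
}
```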
Urit fucked around with this message at 04:00 on Apr 24, 2016 |
# ? Apr 24, 2016 03:52 |
|
I guess my first question is why do you want global state? Can you not get away with passing things around as needed?
|
# ? Apr 24, 2016 06:57 |
|
Ethereal posted:I guess my first question is why do you want global state? Can you not get away with passing things around as needed? I don't NEED it to be global (I can just create 1 registry instance and then pass it around somehow I suppose), I just need a single "registry" that the configured structs can get loaded into so I can reference the configured struct from configuration later. I thought about it some more and holy poo poo this is way different not having a garbage collector letting me poo poo objects everywhere. In the Go code all the stuff registers itself on init e.g. code:
The config looks something like: code:
Also I'm running into lifetime errors trying to get a value out of a map and pass it back to a caller. I guess it makes sense because the map is holding onto that value and the value could be deleted, so then the caller would be referencing freed memory. I guess I have to wrap the whole thing in an Arc<Decoder> instead and clone it for every consumer of the decoder. Urit fucked around with this message at 08:19 on Apr 24, 2016 |
# ? Apr 24, 2016 07:08 |
|
Could you explain what BindFunc and Registery::binders are? Edit: I don't totally understand what you're going for, but this is how I would go about implementing a global registry: code:
syncathetic fucked around with this message at 09:05 on Apr 24, 2016 |
# ? Apr 24, 2016 08:40 |
|
syncathetic posted:Could you explain what BindFunc and Registery::binders are? BindFunc takes a TOML parse tree and turns it into a configured struct. It's basically a constructor - I just called it a bindfunc because Go doesn't have OO style class-based constructors and you can't scope a function to a type namespace easily like Rust's <whatever>::new() inside the impl block. I was "binding" config values to struct values. A binder for a UDP listener looks like: code:
Now, your code: Thanks so much, and that's very similar to what I'm trying to do, but what the heck is decoders_cache and why is it borrowing a deref (&*) in a map call? I am confused as to why I can't just return the reference directly from the map.get() call. Also why are the hashmaps to a "usize" instead of a Box<Decoder>, and how would I insert a decoder into them? The thing is that each decoder itself is responsible for calling Register, though again, I'm not sure how I'd do that in Rust because you can't call functions in an "init" or global context as far as I can tell - maybe via std::sync::Once? I'd have to call a constructor on each decoder and add it to the map, correct? Maybe I'm just doing this hilariously wrong - given the problem, is there a better way? The problem is: take configuration and build structs from that configuration, then allow a configured struct to reference another configured struct. Assume that ordering of config is not an issue e.g. if struct type A depends on struct type B then all structs of type B will always be configured first. This is basically dependency injection, I think. Edit: Got it working: https://gist.github.com/highlyunavailable/f8424d2881e2d7b2d510d114a57ed9c3 It still feels like I'm doing it wrong somehow. Urit fucked around with this message at 21:52 on Apr 24, 2016 |
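The Arc idea from the edit can be sketched like this (names invented): storing `Arc<dyn Decoder>` lets `get` hand back an owned, cheaply-cloned handle instead of a borrow tied to the map's lifetime, which sidesteps the "caller would be referencing freed memory" lifetime errors:

```rust
use std::collections::HashMap;
use std::sync::Arc;

trait Decoder: Send + Sync {
    fn decode(&self, input: &[u8]) -> usize;
}

struct LenDecoder;
impl Decoder for LenDecoder {
    fn decode(&self, input: &[u8]) -> usize {
        input.len()
    }
}

struct Registry {
    decoders: HashMap<String, Arc<dyn Decoder>>,
}

impl Registry {
    fn get(&self, name: &str) -> Option<Arc<dyn Decoder>> {
        // .cloned() bumps the refcount; no deep copy of the decoder.
        self.decoders.get(name).cloned()
    }
}

fn main() {
    let mut decoders: HashMap<String, Arc<dyn Decoder>> = HashMap::new();
    decoders.insert("len".to_owned(), Arc::new(LenDecoder));
    let reg = Registry { decoders };
    let d = reg.get("len").unwrap();
    // `d` is independent of any borrow of `reg` and could move to a thread.
    assert_eq!(d.decode(b"hello"), 5);
}
```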
# ? Apr 24, 2016 19:03 |
|
Double-posting time! I think I have a better (read: more idiomatic) version now: https://gist.github.com/highlyunavailable/0dab6e17bbace8fd10fa7c2e2f121d27 It seems like lifetimes are doing what I want - each item in the registry must last as long as the registry itself (there is no possibility of deleting it), so as long as the registry is in scope, I can guarantee that the items will not be deallocated. I'm not sure how this will interact with threads but I will only be filling the registry once in a single thread and the registries themselves will live in the "main" thread (the config will be read single-threaded) and then hopefully I can pass off an immutable reference to the decoder (via get() or default()) to another thread. I'm still not sure how to actually do the registration but worst case I just have a big old populate function in each module that I manually add each submodule to, or an init function in each submodule that the populate function calls.
|
# ? Apr 25, 2016 19:14 |
EDIT: Got it. Leaving old buggy playground link for posterity. For future self: used std::str::SplitWhitespace. http://is.gd/PMopF8 Jo fucked around with this message at 20:35 on May 5, 2016
|
# ? May 5, 2016 18:47 |
|
Finally getting the chance to fart around with this language. It's really cool so far! One thing that impressed me is the operator overloading - I like that I can define an operator with two different types as the operands and even a third type as the result of the addition.
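A minimal illustration of the mixed-operand overloading being praised, with invented unit types (`Meters`, `Centimeters`, `Millimeters` are just examples):

```rust
use std::ops::Add;

#[derive(Clone, Copy)]
struct Meters(f64);
#[derive(Clone, Copy)]
struct Centimeters(f64);
#[derive(Debug, PartialEq)]
struct Millimeters(f64);

// Left operand, right operand, and Output can all be distinct types.
impl Add<Centimeters> for Meters {
    type Output = Millimeters;
    fn add(self, rhs: Centimeters) -> Millimeters {
        Millimeters(self.0 * 1000.0 + rhs.0 * 10.0)
    }
}

fn main() {
    assert_eq!(Meters(1.0) + Centimeters(5.0), Millimeters(1050.0));
}
```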
|
# ? May 10, 2016 17:28 |
|
Yeah it's funny how much effort they go through for that when C++ gets it just by overloading.
|
# ? May 11, 2016 01:24 |
I'm bumping my head against a module error. Most of the people online seem to say "modules are confusing" and I agree with that sentiment. I have the following structure: code:
code:
code:
code:
If I use instead `use geometry::*;` I get this: code:
EDIT: It looks like the problem is I've got main.rs -> app.rs -> geometry.rs. I can't use the flat mapping with this setup and have to use directories. My new (messy) directory structure is this: code:
app.rs references settings.rs and geom.rs. settings.rs references geom.rs. Jo fucked around with this message at 00:53 on May 17, 2016 |
|
# ? May 16, 2016 20:48 |
|
Jo posted:I'm bumping my head against a module error. Most of the people online seem to say "modules are confusing" and I agree with that sentiment. For that to work out you want no mod lines anywhere but main.rs, main.rs has mod app; mod geometry; mod settings;, and everybody else has use app; use geometry; use settings; as necessary. mod items are for defining your crate's tree structure, you pretty much only ever want one mod line per module in your whole crate, probably at the highest level at which you want that module to be used. use items just bring stuff into scope so you don't have to use absolute paths that start with :: everywhere. Vanadium fucked around with this message at 01:03 on May 17, 2016 |
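A single-file illustration of the mod/use split described above, with inline `mod` blocks standing in for the separate `app.rs` / `geometry.rs` files (note that post-2018-edition paths start with `crate::`, which the 2016-era code in the thread wouldn't have had):

```rust
// In a real project this block would be geometry.rs, declared once in
// main.rs with `mod geometry;`.
mod geometry {
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }
}

// And this would be app.rs, declared once with `mod app;`.
mod app {
    // `use` only brings a path into scope; the module itself was attached
    // to the crate tree exactly once, by the `mod` item in the crate root.
    use crate::geometry::Point;

    pub fn origin() -> Point {
        Point { x: 0.0, y: 0.0 }
    }
}

fn main() {
    let p = app::origin();
    assert_eq!((p.x, p.y), (0.0, 0.0));
}
```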
# ? May 17, 2016 01:00 |
Vanadium posted:For that to work out you want no mod lines anywhere but main.rs, main.rs has mod app; mod geometry; mod settings;, and everybody else has use app; use geometry; use settings; as necessary. Thank you so much! That did it.
|
|
# ? May 17, 2016 06:00 |
|
I'm having quite a lot of trouble with what the types of closures are. I have code:
My current error is: code:
Also I tried to make the functions inside the RegistrationFunctions object be references instead of on the heap, but apparently the references to the Fn types might outlive the Fns themselves? Do Fns have a hidden lifetime parameter? e: to recall, Unregistered has the _padding field so that I can potentially transmute between Registered and Unregistered.
|
# ? May 19, 2016 13:07 |
|
I reread the closure tutorial and finally grokked the comment about how closures are implemented by the compiler constructing bespoke structs for each one and then implementing the Fn traits. So after just boxing everything, I ended up with code:
|
# ? May 22, 2016 12:59 |
|
Yeah, you can't really name closure types, so you can't reasonably return them out of generic code; you always have to be generic over them. Fundamentally, fn foo<F: Fn()>() -> F can't possibly work because foo promises to be able to return a closure of any type of the caller's choosing, which it obviously can't. The inverse, where foo returns a closure of a specific secret type that the caller just has to deal with, can't currently be expressed, except with boxing + type erasure like you're doing.
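Both cases can be sketched like so; note that return-position `impl Trait` (stable since Rust 1.26, after this post) now expresses the "specific secret type" case without the box. `boxed_adder` and `impl_adder` are invented names:

```rust
// Type-erased return, as in the thread: one heap allocation, dynamic dispatch.
fn boxed_adder(n: i32) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |x| x + n)
}

// Modern alternative: the caller gets "some Fn" of a type only the callee
// knows, with no allocation and static dispatch.
fn impl_adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n
}

fn main() {
    assert_eq!(boxed_adder(2)(40), 42);
    assert_eq!(impl_adder(2)(40), 42);
}
```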
|
# ? May 22, 2016 13:43 |
|
Actually this didn't quite work, and the Boxes required lifetime parameters like so: (otherwise the closures were given static lifetimes) code:
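A minimal sketch of why the lifetime parameter is needed: a bare `Box<dyn Fn(..)>` means `+ 'static`, so a boxed closure that borrows something needs an explicit `+ 'a`. `make_scaler` is an invented example, not the poster's code:

```rust
// Without `+ 'a` this signature would demand a 'static closure, and the
// borrow of `factor` would be rejected.
fn make_scaler<'a>(factor: &'a i32) -> Box<dyn Fn(i32) -> i32 + 'a> {
    Box::new(move |x| x * *factor)
}

fn main() {
    let factor = 3;
    let scale = make_scaler(&factor);
    assert_eq!(scale(7), 21);
}
```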
gonadic io fucked around with this message at 16:10 on May 22, 2016 |
# ? May 22, 2016 15:55 |
|
I'm actually running into a real design issue here around FFI: I'd like to deregister an SVO, changing its type code:
code:
|
# ? May 22, 2016 17:26 |
I'm sorta' stuck on generics and inheritance now. I'm defining `trait Node` which has a few attributes. I'd like to make a Graph struct which has a HashMap<String, Node> in it. Is there any way to make Graph accept any mix of types, so long as they implement Node? Box them?
Jo fucked around with this message at 01:22 on Jun 2, 2016 |
|
# ? Jun 2, 2016 01:16 |
|
Jo posted:I'm sorta' stuck on generics and inheritance now. I'm defining `trait Node` which has a few attributes. I'd like to make a Graph struct which has a HashMap<String, Node> in it. Is there any way to make Graph accept any mix of types, so long as they implement Node? Box them? Yup, you have to box them: a bare Node is a trait, not a concrete type, so the compiler can't know what size the implementing value is, which means it can't sit inline on the stack or in the map. Putting it behind a Box (on the heap) gives every entry a known, pointer-sized footprint.
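A minimal sketch of the boxed-trait-object map, with invented `Node` implementors (`City`, `Junction`, and the `degree` method are illustrative, not Jo's actual design):

```rust
use std::collections::HashMap;

trait Node {
    fn degree(&self) -> usize;
}

struct City {
    roads: usize,
}
struct Junction;

impl Node for City {
    fn degree(&self) -> usize {
        self.roads
    }
}
impl Node for Junction {
    fn degree(&self) -> usize {
        4
    }
}

struct Graph {
    // Box gives every value the same (pointer) size, so mixed concrete
    // types implementing Node can share one map.
    nodes: HashMap<String, Box<dyn Node>>,
}

fn main() {
    let mut nodes: HashMap<String, Box<dyn Node>> = HashMap::new();
    nodes.insert("springfield".into(), Box::new(City { roads: 7 }));
    nodes.insert("i5-exit".into(), Box::new(Junction));
    let g = Graph { nodes };
    assert_eq!(g.nodes["springfield"].degree(), 7);
    assert_eq!(g.nodes["i5-exit"].degree(), 4);
}
```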
|
# ? Jun 2, 2016 02:04 |
|
E: NM
Linear Zoetrope fucked around with this message at 10:38 on Jun 6, 2016 |
# ? Jun 6, 2016 10:32 |
|
Is there a nice way to initialize an array with values from the concatenation of two constant arrays or slices of constant arrays? I'd like to do something like this: code:
code:
e: #rust says that it isn't possible yet taqueso fucked around with this message at 18:58 on Jun 6, 2016 |
# ? Jun 6, 2016 17:46 |
I noticed recently that when developing I'm "just getting it to compile". That might mean adding & in places or doing foo.to_string() instead of foo. Am I going to be leaking memory like crazy or painting myself into a corner, or can I assume reasonably correct behavior as long as the code compiles (assuming no logic bugs). Rust's doc makes it seem like leaking memory is acceptable, but I'm in a weird state of wanting to free stuff without really having a way to do so aside from drop().
|
|
# ? Jun 7, 2016 23:59 |
|
I mean, the actual guarantee is that a value is dropped deterministically when it goes out of scope (unless it was moved somewhere else first); the compiler doesn't free it earlier just because it's never referenced again, though it can optimize as long as you can't observe the difference. The exceptions are weird cases like Rc reference cycles, which just leak. I'm not sure what you're really asking about. If you absolutely need to free memory NOW and you can't use scoping to achieve it, that's what drop is for. But yes, you should assume memory is alive until you explicitly call drop or the object goes out of scope. Also, apparently (&str).to_owned() is faster than (&str).to_string() for some reason.
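The scope rules being described can be demonstrated with a logging `Drop` impl (a sketch; `Noisy` and `drop_log` are invented, and `std::mem::drop` is literally just a function that takes its argument by value and lets it fall out of scope):

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which values are dropped.
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Noisy(&'static str);
impl Drop for Noisy {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push(self.0));
    }
}

fn drop_log() -> Vec<&'static str> {
    LOG.with(|l| l.borrow_mut().clear());
    let a = Noisy("a");
    {
        let _b = Noisy("b");
    } // _b dropped here, at the end of its inner scope
    drop(a); // explicit early drop, before the end of the function
    let _c = Noisy("c"); // still alive when we snapshot the log below
    LOG.with(|l| l.borrow().clone())
}

fn main() {
    // b's scope ended first, then a was dropped explicitly; c isn't
    // dropped until after the snapshot is taken.
    assert_eq!(drop_log(), vec!["b", "a"]);
}
```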
|
# ? Jun 8, 2016 00:13 |
Jsor posted:
No longer true on nightly! Specialization allowed for this to be fixed.
|
|
# ? Jun 8, 2016 00:17 |
Jsor posted:I mean, there are no guarantees about where memory is freed except that it will always be freed sometime between when it's never referenced again and when it goes out of scope, the compiler is free to optimize around that AFAIK, but it will always be dropped by the time it goes out of scope (except for weird cases involving circular reference counters). I'm not sure what you're really asking about. If you absolutely need to free memory NOW and you can't use scoping to achieve it, that's what drop is for. But yes, you should assume memory is alive until you explicitly call drop or the object goes out of scope. I'm just worried about building a heaping pile of poo poo because I'm "just getting it done" instead of taking the time to jump back into the docs and running the analytical route. This is entirely a personal project, so I'm trying to concern myself with more of the architectural aspects than I am with the details of the code. It's a prototype engine based on Glium for Awful Jam next month, if that makes a difference. Rust is in a strange place for me because it sits right between the managed stuff I've done in Python and Java and the completely manual stuff I've done in C. I feel like I should be calling malloc and free and worrying about & vs * vs [], and the fact that I can just kinda' write whatever, fix the compiler warnings, and have it work means I'm suspicious that I'm missing something fundamental.
|
|
# ? Jun 8, 2016 18:17 |
|
Rust's memory management is based on the observation that types aren't the only things you can enforce at compile-time. The work of ensuring memory consistency is still being done; it's just that a large part of that work was put into the compiler, and the part that remains for the end-user to do takes the form of appeasing the compiler.
|
# ? Jun 9, 2016 00:36 |
|
Or the end user could understand what's going on. You should be able to "see the allocations" -- they're as visible as they are in C++.
|
# ? Jun 9, 2016 03:42 |
|
sarehu posted:Or the end user could understand what's going on. You should be able to "see the allocations" -- they're as visible as they are in C++. Well, yes. I only mean that the fact that having a machine checking your work makes it easier.
|
# ? Jun 9, 2016 04:03 |
|
I'd like to be able to create byte literals using a different notation than the standard, for 'drawing' in an array of bytes. This is to allow creation of bitmap fonts in the code. I'd like to be able to do something roughly like this:Rust code:
Rust code:
|
# ? Jul 9, 2016 00:56 |
|
I don't think this is macro-able, but this is practically the example for compiler plugins in the book (they use Roman Numerals, but it's the same idea of parsing identifiers to make numbers).
|
# ? Jul 9, 2016 01:02 |
|
Jsor posted:I don't think this is macro-able, but this is practically the example for compiler plugins in the book (they use Roman Numerals, but it's the same idea of parsing identifiers to make numbers). Thanks for pointing that out, it does seem like a good fit and the right place to do this. And if I understand this right, I would make a plugin library that will function like any other lib within the standard cargo dependency system? Rust is too cool.
|
# ? Jul 9, 2016 01:19 |
|
I actually wrote it: code:
XXX      = 0b1110_0000
X_       = 0b1000_0000
XXXXXX_X = 0b1111_1101
Right? What I wrote passes all of your test cases; if I got it wrong I at least gave you a point to start from. As a note, if you instead treat everything as starting from the rightmost bits (that is, XXXX is 0000_1111 instead of 1111_0000) you can trivially extend this to an arbitrary usize, but the way you wrote it makes that much harder. Linear Zoetrope fucked around with this message at 01:43 on Jul 9, 2016
# ? Jul 9, 2016 01:40 |
|
I wrote it too. It was really easy, since that example was almost perfect. https://github.com/jdeeny/drawbytes This needs to follow the de facto Chip8 standard for the fonts, so it needs to be the most significant bits ('left justified'). I was mulling over how to make it a little more configurable, maybe an optional format string, like 'u8L' or 'u32R'. e: I had to add plugin = true to the lib section of the plugin's Cargo.toml taqueso fucked around with this message at 03:54 on Jul 9, 2016
# ? Jul 9, 2016 02:35 |
|
Rust code:
e: I found get and get_mut in the docs, so I can use code:
taqueso fucked around with this message at 06:45 on Jul 12, 2016 |
# ? Jul 12, 2016 05:19 |
|
I'd write it more like:code:
QuantumNinja fucked around with this message at 06:13 on Jul 28, 2016 |
# ? Jul 28, 2016 06:10 |
|
|
|
QuantumNinja posted:I'd write it more like: I don't want to panic inside my library for a minor error like this. I'm trying to pass the results all the way out to the API boundary so the calling app can see & handle the errors. I considered using Option here, but didn't use it for two reasons: I read somewhere that it was bad form to use None to indicate an error condition, and also so I don't have to convert the Option to a Result in the calling functions. This is pretty analogous to std's use of None when popping from a collection, so it's mostly the second reason. The library is pretty functional now, I'll try to post the code later today.
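The Option-inside / Result-at-the-boundary split being described can be sketched with `ok_or_else` (all names here are invented, not the library's actual API):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ApiError {
    UnknownKey(String),
}

struct Store {
    inner: HashMap<String, i32>,
}

impl Store {
    // Internal accessor: Option is fine here, just like HashMap::get itself.
    fn get(&self, key: &str) -> Option<&i32> {
        self.inner.get(key)
    }

    // API boundary: convert the None into a real error the caller can
    // see and handle.
    fn get_required(&self, key: &str) -> Result<i32, ApiError> {
        self.get(key)
            .copied()
            .ok_or_else(|| ApiError::UnknownKey(key.to_owned()))
    }
}

fn main() {
    let mut inner = HashMap::new();
    inner.insert("answer".to_owned(), 42);
    let store = Store { inner };
    assert_eq!(store.get_required("answer"), Ok(42));
    assert_eq!(
        store.get_required("nope"),
        Err(ApiError::UnknownKey("nope".to_owned()))
    );
}
```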
|
# ? Jul 28, 2016 16:29 |