crazypenguin
Mar 9, 2005
nothing witty here, move along

Dominoes posted:

Looking for critique and suggestions for a web framework I'm working on. Repo. Mainly regarding the guide (Readme) and API.

Is there a reason you're not having people use wasm-pack?

crazypenguin
Mar 9, 2005
nothing witty here, move along
There are two main crates that help deal with Results: anyhow and thiserror.

anyhow is great when you're just trying to consume errors (e.g. in applications), and thiserror is great for building libraries that have to return them.

The end result is quite pleasant. I feel like it’s a good middle ground between checked and unchecked exceptions. You’re forced to deal with errors all the time, but the common case is “just write ?” and that does the right thing.
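Neither crate is needed to see the mechanism. Here's a std-only sketch of roughly what the `thiserror` derive expands to, with plain `?` doing the conversion work (all the names here are made up for illustration):

```rust
use std::collections::HashMap;
use std::fmt;

// A hand-written library error; `thiserror`'s derive generates the
// Display and From boilerplate below for you.
#[derive(Debug)]
enum ConfigError {
    Missing(String),
    Parse(std::num::ParseIntError),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(k) => write!(f, "missing key: {k}"),
            ConfigError::Parse(e) => write!(f, "bad number: {e}"),
        }
    }
}

impl std::error::Error for ConfigError {}

impl From<std::num::ParseIntError> for ConfigError {
    fn from(e: std::num::ParseIntError) -> Self {
        ConfigError::Parse(e)
    }
}

// Because of the From impl, `?` converts the parse error automatically:
// the common case really is "just write ?".
fn port_from(map: &HashMap<&str, &str>) -> Result<u16, ConfigError> {
    let raw = map
        .get("port")
        .ok_or_else(|| ConfigError::Missing("port".into()))?;
    Ok(raw.trim().parse()?)
}

fn main() {
    let mut m = HashMap::new();
    m.insert("port", " 8080 ");
    assert_eq!(port_from(&m).unwrap(), 8080);
    assert!(port_from(&HashMap::new()).is_err());
}
```

On the consuming side, anyhow's `anyhow::Result` plays the same trick for *any* error type, which is why it's the comfortable choice for application code.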

crazypenguin
Mar 9, 2005
nothing witty here, move along
With clap’s derive feature, it should just be an enum.

crazypenguin
Mar 9, 2005
nothing witty here, move along
Are you doing web assembly or something?

That crate does have a fetch_blocking that just returns the response. Maybe that's all you need. And it also recommends other crates for promises in its docs.

But it also seems like an odd crate choice (I'm not sure what the typical choice would be here, though... reqwest?)

Anyway, if you want to fix up that code, you'll have to do something different for sure. You haven't written any code that accommodates the fetch happening in another thread; you just try to return a result that you might not have yet (which is why Rust is complaining).

crazypenguin
Mar 9, 2005
nothing witty here, move along
Also, be suspicious of URLs with something like "1.0.0-alpha" in them.

https://doc.rust-lang.org/book/

crazypenguin
Mar 9, 2005
nothing witty here, move along
When you own a type, Rust lets you get away with "partial moves." This is why changing it from `&mut self` to `self` would work. When the value is fully owned, the compiler can track which parts are or aren't still valid memory, and get all the details right.

When it's not owned, but an exclusive reference (`&mut`), in principle the same tracking is possible. BUT, now the object continues to exist (instead of being de-allocated) when the function returns. This means there's a problem when panics and returns happen in unexpected places. If Rust did allow partial moves on an `&mut` value, you'd have to get the equivalent of C++ exception safety right. Which is to say, the rules are:

1. You're hosed.
2. ABANDON ALL HOPE
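To make the owned-vs-borrowed distinction concrete, here's a minimal sketch (made-up struct, nothing fancy):

```rust
struct Pair {
    a: String,
    b: String,
}

// With an owned value, the compiler tracks partial moves field by field.
fn owned_ok(p: Pair) -> (String, String) {
    let a = p.a; // moves p.a out; p is now partially moved
    let b = p.b; // fine: the compiler knows p.b was still valid
    (a, b)
}

// The same thing through an exclusive reference is rejected:
//
// fn borrowed_bad(p: &mut Pair) -> String {
//     p.a // ERROR: cannot move out of `p.a`, which is behind a `&mut`
// }

fn main() {
    let p = Pair { a: "hi".into(), b: "there".into() };
    let (a, b) = owned_ok(p);
    assert_eq!(a, "hi");
    assert_eq!(b, "there");
}
```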

This is part of why things like `mem::replace` and `mem::take` exist. You can "swap" out something you have an exclusive mutable reference to, and now you own it and can do whatever owner things you want. You just need some valid placeholder to leave in its place, like Indiana Jones's bag of sand.
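A quick sketch of the swap trick, with a made-up struct (`Vec` implements `Default`, so the empty vec is the bag of sand):

```rust
use std::mem;

struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    // We only have `&mut self`, so we can't move `self.data` out directly.
    // `mem::take` swaps in the placeholder (an empty Vec) and hands us
    // ownership of the old contents.
    fn drain_and_process(&mut self) -> Vec<u8> {
        let owned: Vec<u8> = mem::take(&mut self.data);
        // `self.data` is still valid (just empty), so `self` stays usable.
        owned.into_iter().map(|b| b.wrapping_add(1)).collect()
    }
}

fn main() {
    let mut buf = Buffer { data: vec![1, 2, 3] };
    let out = buf.drain_and_process();
    assert_eq!(out, vec![2, 3, 4]);
    assert!(buf.data.is_empty()); // the placeholder is what's left behind
}
```

`mem::replace` is the same idea when you want to supply your own placeholder instead of `Default::default()`.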

Every time you use these, you should think for a moment about "unwind safety." Most of the time this doesn't matter, so there's nothing to do! Rust is forcing you to ensure no memory safety errors occur here, but that doesn't mean some other error couldn't happen if you panic between the `take` and putting the intended value back. It only really matters if you plan on catching panics and continuing, which is relatively rare. (If you do need unwind safety here, you can resolve some of these problems with the trick of creating a 'holder' object that does the initial `replace` and does a `replace` back as part of its `drop` implementation. That way, panic unwinding (which runs drops) will still put back the intended value instead of the placeholder. You'll find this pattern in the standard library's `Vec` implementation in some places.)
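The 'holder' trick can be sketched in a few lines; this is a toy version (names made up), not what std actually ships:

```rust
use std::mem;

// Takes the value out of `slot` on construction and puts it back in Drop,
// so even a panic-unwind restores it instead of leaving the placeholder.
struct Holder<'a, T> {
    slot: &'a mut T,
    value: Option<T>,
}

impl<'a, T> Holder<'a, T> {
    fn new(slot: &'a mut T, placeholder: T) -> Self {
        let value = Some(mem::replace(slot, placeholder));
        Holder { slot, value }
    }

    fn get_mut(&mut self) -> &mut T {
        self.value.as_mut().unwrap()
    }
}

impl<'a, T> Drop for Holder<'a, T> {
    fn drop(&mut self) {
        // Runs on normal exit *and* during panic unwinding.
        if let Some(v) = self.value.take() {
            *self.slot = v;
        }
    }
}

fn main() {
    let mut s = String::from("hello");
    {
        let mut h = Holder::new(&mut s, String::new());
        h.get_mut().push_str(", world");
    } // drop puts the (modified) value back
    assert_eq!(s, "hello, world");
}
```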

crazypenguin
Mar 9, 2005
nothing witty here, move along
I haven't.

At the risk of going wildly off topic, I'm really uninterested in any of the generative crap.

But I am very excited about the possibilities for LLMs in creating ultra-specific search engines. I wasted 4 hours once trying to find the function in the 'petgraph' library for computing a strongly connected component graph. I knew it had to be there, but in the end I had to give up on the docs and clone the repo and grep. (The function is called 'condensation' apparently?? And no search for 'scc' or 'component' will find it in the docs!%&$)

Domain-specific searching seems like a real potential killer app for these things. Let me search my dependencies with a spoken voice query in an IDE, that'd rule.

crazypenguin
Mar 9, 2005
nothing witty here, move along

prom candy posted:

Is there a reason there's no like Rust on Rails or Laravel equivalent or whatever. Like a batteries-included Rust web framework? I've been reading through the Rust book and everything about the language is so appealing to me but it seems like lots of people think it would be nuts to use Rust for that kind of higher level web programming. I guess I don't understand why because coming from the Ruby on Rails world I would love to have a rigid type system, Result types, and extremely good performance. Is it just because the community and ecosystem isn't there or does the day-to-day of working with Rust make it unsuitable for that type of work?

I believe that Rust, because of the narrative that it's a systems language comparable to C++, is currently used mostly for lower-level back-end stuff. Consequently, the library ecosystem there is much more mature (e.g. axum).

So for more app frameworky stuff, I think we're just waiting on (1) the library ecosystem to march in that direction over time and (2) to some extent, some of the ergonomic problems with async to be more fully resolved.

e: I also just want to say I agree with your impression, and I don't think there's anything (major) that makes Rust unsuitable. I think these frameworks will come.

crazypenguin fucked around with this message at 18:43 on May 25, 2023

crazypenguin
Mar 9, 2005
nothing witty here, move along
Axum definitely appears to be the default choice for building web service stuff in Rust at this time.

If you're pretty excited about Rust and kinda new still, I definitely recommend just casually looking over the projects in the tokio-rs org on github: https://github.com/tokio-rs

What ends up there (vs. elsewhere) is of course the result of a messy social process, so it's not a one-stop shop, but the projects are all at least interesting, and they use other interesting stuff.

crazypenguin
Mar 9, 2005
nothing witty here, move along
Databases have enum column types as well.

crazypenguin
Mar 9, 2005
nothing witty here, move along
The best place to start is with the compiler error message: explain what you don't understand about it, or why you think it shouldn't be complaining about that.

crazypenguin
Mar 9, 2005
nothing witty here, move along
Fun Rust fact: it'll figure out initialization of a variable from a subsequent statement, even across the two branches of an `if`, so no dummy initializer is required (nor `mut`!!).

Rust code:
let output: &str;

if self.cache.contains_key(&id) {
    output = self.cache.get(&id).unwrap();
} else {
    self.cache.insert(id, String::from(default));
    output = default;
}

return output;
But of course, we could just move the initialization expression to the RHS of the `let` as well. And then we're just directly returning a let-bound variable, so we could just... have that as the final expression.

Rust code:
    pub fn get(&mut self) -> &str {
        let id = 42;
        let default = "foo";
        if self.cache.contains_key(&id) {
            self.cache.get(&id).unwrap()
        } else {
            self.cache.insert(id, String::from(default));
            default
        }
    }
And then of course, we can use `entry`, though the original question was already aware of that approach. But I guess for completeness, for anyone casually curious about Rust and reading the thread:

Rust code:
self.cache.entry(id).or_insert_with(|| default.to_owned())
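For anyone who wants to actually run that one-liner, here's a self-contained version (the `Cache` struct and the hardcoded `id`/`default` are just scaffolding to match the snippets above):

```rust
use std::collections::HashMap;

struct Cache {
    cache: HashMap<u32, String>,
}

impl Cache {
    pub fn get(&mut self) -> &str {
        let id = 42;
        let default = "foo";
        // `entry` does the lookup once: insert-if-absent, then return
        // a reference to whatever is in the map.
        self.cache.entry(id).or_insert_with(|| default.to_owned())
    }
}

fn main() {
    let mut c = Cache { cache: HashMap::new() };
    assert_eq!(c.get(), "foo");
    assert_eq!(c.get(), "foo"); // second call hits the cached entry
    assert_eq!(c.cache.len(), 1);
}
```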

crazypenguin
Mar 9, 2005
nothing witty here, move along

Jo posted:

I have four ideas for solutions but none of them seem very elegant:

1) Go back to the approach that used channels, but instead of just tx/rx have a tx/rx for sending images AND a tx/rx for 'returning' images which can get reused. Pretty elegant in theory and the lack of locking probably will be faster. Problem: having to return images after being done processing feels too much like manual memory management.
2) Pass a lambda into the read_next_frame_blocking. I rather like this, but I think using closures to do stuff could lead to more hardship in the future.
3) Have another fake image that I swap into the Vec in place of the actual image, then swap it back at return time. This feels risky and dumb.
4) Go back to the channel approach. Bummer to throw away the work and bummer to have the allocations, but it's maybe the nicest going forward.

I just want to note that you never actually stated what the problem was. I assume the code didn't borrow check, or didn't work, because of the returned reference's lifetime, since that's what it sounds like from these solutions.

I agree (4) should be out; there *should* be a workable enough way to avoid the allocations here. I'm assuming the performance win would be nice. :)

All of 1, 2, and 3 seem like reasonable choices. I don't think (3) should be considered that terrible, considering it's something like what `std::mem::replace`/`take` do, so it's sort of a "pattern" in Rust.

(2) is probably what I would have chosen. I'm curious why you think it'd be a problem in the future? This is the most like many other solutions to "I want a reference to it but you can't *return* a reference".

The other maybe-but-haven't-thought-it-through possibility is something like `MutexGuard`: return not the reference itself, but a guard object that's basically a reference, except it carries a lifetime parameter that's correctly constrained in the sense of "lives longer than the function call, but shorter than self's lifetime and shorter than the next frame or whatever".
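The guard idea could look something like this; every name here (`Frame`, `FrameGuard`, `Reader`, `next_frame`) is hypothetical, since I don't know your actual types:

```rust
use std::ops::Deref;

struct Frame {
    pixels: Vec<u8>,
}

// The guard borrows the reader, so the borrow checker won't let you
// call `next_frame` again (or drop the reader) while it's alive.
struct FrameGuard<'a> {
    frame: &'a Frame,
}

impl<'a> Deref for FrameGuard<'a> {
    type Target = Frame;
    fn deref(&self) -> &Frame {
        self.frame
    }
}

struct Reader {
    current: Frame,
}

impl Reader {
    fn next_frame(&mut self) -> FrameGuard<'_> {
        // ... refill self.current from the ring buffer here ...
        FrameGuard { frame: &self.current }
    }
}

fn main() {
    let mut r = Reader { current: Frame { pixels: vec![0; 4] } };
    let g = r.next_frame();
    assert_eq!(g.pixels.len(), 4); // deref makes the guard act like &Frame
}
```

A real version would likely do the "return the slot" bookkeeping in the guard's `Drop` impl, which is exactly how `MutexGuard` releases its lock.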

(Also, that atomic load/store is a bit sus. Even if it's purely single-threaded, I'd probably write a function to do that increment with a compare-and-swap, just to leave my mind at ease...)

(Also also, I assume there are existing ring buffer-based channel implementations, are you sure you need a custom one?)

crazypenguin
Mar 9, 2005
nothing witty here, move along
Locking stdout is purely a process-internal thing, not something with some kind of OS-level effect.

...if that's what you're wondering.
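For reference, holding the lock just prevents other threads *in the same process* from interleaving their writes; a sketch:

```rust
use std::io::Write;

// Generic over Write so the same code works with StdoutLock or a Vec<u8>.
fn write_report(mut out: impl Write) -> std::io::Result<()> {
    writeln!(out, "line one")?;
    writeln!(out, "line two")?;
    Ok(())
}

fn main() {
    let stdout = std::io::stdout();
    // The lock is held for both writes, so no other thread in this
    // process can sneak a line in between them. Other processes
    // writing to the same terminal or pipe are unaffected.
    write_report(stdout.lock()).unwrap();
} // lock released here when the StdoutLock is dropped
```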
