|
it came about in pre-core asp.net because of iis request thread pooling reasons. i've never seen it be a problem in dotnet core and i don't know why you wouldn't use it? in the python space it's definitely a mess though
|
# ? Oct 18, 2021 15:23 |
|
Sapozhnik posted:yeah backend async/await is bad in most situations, don't use it

unfortunately most c# packages use async/await everywhere, and you can get deadlocks if you call async code from non-async code. it sucks, but it's much easier to make everything async if you can, and thus the async disease propagates through everything you write too
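the sync-over-async trap isn't c#-only, for what it's worth. a hedged sketch of the python cousin (fetch_value is a made-up stand-in for an async-only library call) — blocking on async code works at the top level, and blows up with a RuntimeError instead of silently deadlocking when you try it from inside a running event loop:

```python
import asyncio

async def fetch_value():
    # stand-in for an async-only library call
    await asyncio.sleep(0)
    return 42

def sync_wrapper():
    # sync code blocking on async code: fine at top level...
    return asyncio.run(fetch_value())

async def handler():
    # ...but calling the same sync wrapper from inside a running
    # event loop is refused instead of deadlocking like C# can
    try:
        return sync_wrapper()
    except RuntimeError as e:
        return f"boom: {e}"

print(sync_wrapper())          # 42 — works from a sync context
print(asyncio.run(handler()))  # boom: asyncio.run() cannot be called from a running event loop
```

either way the lesson is the same: once a call chain goes async, everything above it wants to be async too.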
|
# ? Oct 18, 2021 15:26 |
|
monads, the syphilis of programming
|
# ? Oct 18, 2021 15:31 |
|
Sapozhnik posted:yeah backend async/await is bad in most situations, don't use it

how can you be so wrong? the threadpool runs multiple threads, and your db calls, io, or network stuff don't need a thread and should be non-blocking. the webserver is built on the assumption that it can handle more requests than it has threads in the threadpool, but if you keep blocking then it's going to end up thread starved. async/await is good and you should use it wherever possible
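and even when you're stuck with a genuinely blocking driver, you can at least keep it off the loop thread. a rough sketch (blocking_db_call is hypothetical, sleep stands in for a synchronous driver) — ten fake requests overlap instead of serializing, and the event loop thread stays free the whole time:

```python
import asyncio
import time

def blocking_db_call():
    # stand-in for a synchronous driver that parks its thread
    time.sleep(0.1)
    return "row"

async def handler():
    # the loop thread stays free to run other handlers while
    # this request waits on a worker thread
    return await asyncio.to_thread(blocking_db_call)

async def main():
    t0 = time.monotonic()
    rows = await asyncio.gather(*(handler() for _ in range(10)))
    elapsed = time.monotonic() - t0
    return rows, elapsed

rows, elapsed = asyncio.run(main())
print(len(rows), f"{elapsed:.2f}s")  # the ten "requests" overlap instead of taking ~1s
```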
|
# ? Oct 18, 2021 15:54 |
|
NihilCredo posted:you're the second guy that says that async should stay in ui land and backend code should be synchronous and it baffles me

But they're not spinning, they're blocked. You can have far more OS threads than CPU cores if you want. If one request equals one OS thread then you get comprehensible stack traces, and you can resource-limit those threads as well so that one runaway request doesn't soak up all of your CPU. Put simply, there's a hard separation between requests when you run them on OS threads that is not quite as hard when you shred all of their processing into tiny pieces and mix them into one asynchronous IO work pool.

Email submission to your outbound gateway certainly shouldn't be taking a long time. Heavy queries on the database backend, well, the client is blocked anyway, and you have to store your request's execution state somewhere in the meantime, whether that's in the request thread's stack or in continuation ravioli on the heap.

If you have a tunable thread pool for your application server instances then it also acts as a natural limit for the number of concurrent requests your application has in flight; if your backend is backlogging then your thread pool will naturally fill up and back-pressure the clients as well. You would have to explicitly implement a concurrent request rate limiting scheme in an async/await application.

I'm only really seeing upsides for synchronous thread pools versus a continuation-passing free-for-all.
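to be fair, the "explicitly implement a concurrent request rate limiting scheme" part is a few lines in asyncio. a hedged sketch, assuming a semaphore is an acceptable stand-in for the thread pool's natural limit (the bookkeeping globals are just there to observe the peak):

```python
import asyncio

MAX_IN_FLIGHT = 4          # analogue of the thread pool size
in_flight = 0
peak = 0

async def handle_request(sem: asyncio.Semaphore, i: int) -> int:
    global in_flight, peak
    async with sem:        # waits here when the "pool" is full
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)   # stand-in for real work
        in_flight -= 1
    return i

async def main() -> int:
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    await asyncio.gather(*(handle_request(sem, i) for i in range(20)))
    return peak

print(asyncio.run(main()))  # the peak never exceeds 4
```

the back-pressure is real: requests 5 through 20 just park on the semaphore, same as they'd park waiting for a pool thread.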
|
# ? Oct 18, 2021 15:56 |
|
i reckon that guy probably fires off one db request, waits for it to complete, then fires off a second db request, then waits for it to complete, tying up a thread the whole time, and when clients complain that the site is slow he just shrugs and suggests there's nothing that can possibly be done about it, that's just how computers work
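the difference is two characters of thought, basically. a sketch with a made-up fake_db_query standing in for real queries — awaiting one after the other versus letting them overlap with gather:

```python
import asyncio
import time

async def fake_db_query(name: str) -> str:
    await asyncio.sleep(0.1)   # stand-in for real query latency
    return f"{name}-result"

async def sequential() -> float:
    t0 = time.monotonic()
    await fake_db_query("a")
    await fake_db_query("b")   # doesn't even start until "a" finishes
    return time.monotonic() - t0

async def concurrent() -> float:
    t0 = time.monotonic()
    await asyncio.gather(fake_db_query("a"), fake_db_query("b"))
    return time.monotonic() - t0

print(f"sequential: {asyncio.run(sequential()):.2f}s")  # ~0.2s
print(f"concurrent: {asyncio.run(concurrent()):.2f}s")  # ~0.1s
```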
|
# ? Oct 18, 2021 15:58 |
|
this thread really is for terrible programmers jesus christ
|
# ? Oct 18, 2021 16:02 |
|
call me when your OS is happy with 100k runnable threads
|
# ? Oct 18, 2021 16:04 |
|
no matter if you are blocking in your request handlers or not, you can end up starving the application of threads, but if you are blocking when you don't need to then you are wasting a lot of time that could be spent handling more requests. quite recently I was involved in trying to fix performance problems in an old asp.net app where they were blocking a lot, and according to perfview over 50% of the time was spent blocked, followed by actual cpu and then network. one "solution" was for us, the callers of this system, to rate limit ourselves, because they couldn't handle close to as many simultaneous users…
|
# ? Oct 18, 2021 16:12 |
|
isn't "async/await" just a needlessly complicated way to say "coroutine"? coroutines own and I wish we had access to c++20 coroutines in our codebase

like the "please, entity, load all your resources and poo poo" stuff, which may involve spawning other entities and waiting for them to finish their own initializations. it's done as a method called every frame that returns a bool to indicate if it's finished, and the code inside is usually either some horrible bespoke state machine, or doesn't bother, does everything in a single frame, and dumps a big turd in the profiler

there are some monstrosities like that that I am going to have to split up and I wish I could just insert some co_yield (lol) to make it stop and come back later without rewriting half of this poo poo
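python generators make a decent whiteboard sketch of that co_yield pattern (the entity names and step counts here are made up): the per-frame init becomes a resumable function, and the hand-rolled state machine disappears into the yield points.

```python
def load_entity(name: str, steps: int):
    """Resumable init: does one slice of work per frame, like co_yield."""
    for step in range(steps):
        # pretend to load one resource, then hand the frame back
        yield f"{name}: loaded resource {step}"
    # falling off the end means initialization is finished

def run_frames(tasks):
    """Per-frame driver: tick every live task once, drop finished ones."""
    frames = 0
    while tasks:
        frames += 1
        still_running = []
        for t in tasks:
            try:
                next(t)            # resume until its next yield
                still_running.append(t)
            except StopIteration:  # this entity finished initializing
                pass
        tasks = still_running
    return frames

print(run_frames([load_entity("goblin", 3), load_entity("castle", 5)]))
# 6 — castle yields on frames 1-5, and frame 6 notices it's done
```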
|
# ? Oct 18, 2021 16:17 |
|
TwoDice posted:call me when your OS is happy with 100k runnable threads

echo 100000 > /proc/sys/kernel/threads-max
|
# ? Oct 18, 2021 16:19 |
|
probably need to increase vm.max_map_count too for it to be "happy"
|
# ? Oct 18, 2021 16:20 |
|
Zlodo posted:isn't "async/await" just a needlessly complicated way to say "coroutine"?

sometimes? that's part of the fun: figuring out what differences are hidden behind the similar syntax when someone says async/await. like, I assume javascript promises are isomorphic to coroutines, but they don't feel that similar to me when using them
|
# ? Oct 18, 2021 16:22 |
|
hobbesmaster posted:probably need to increase vm max map count too for it to be "happy"

it's probably gonna be sad either way
|
# ? Oct 18, 2021 16:26 |
|
NihilCredo posted:you're the second guy that says that async should stay in ui land and backend code should be synchronous and it baffles me

no, i said non-ui code in swift should just be synchronous. that is a very different thing from saying that backend code should be synchronous, because no one actually writes swift on the server. your ios app is not going to have so many pending network requests that having a blocked thread for each one is actually a problem, and if you somehow do then it's a sign you should make a less lovely API for talking to your server. there's all this poo poo that's been developed because it's what you have to do to handle 100k incoming requests that just doesn't make any sense when you have zero incoming requests and average well under one outgoing request per second.

Plorkyeran fucked around with this message at 16:41 on Oct 18, 2021 |
# ? Oct 18, 2021 16:39 |
|
DoomTrainPhD posted:As you should. It's far better than Java.

lol
|
# ? Oct 18, 2021 16:40 |
|
|
# ? Oct 18, 2021 16:44 |
|
python is fine for certain use cases but lol at calling it "better" than java
|
# ? Oct 18, 2021 16:53 |
|
they're about equally bad, imo. python gets better as the developer-price:computer-price ratio increases
|
# ? Oct 18, 2021 17:07 |
|
like anything there are pros and cons, and being aware of them will let you use either effectively. feel like i say some variation of this once a month given i work with both java and python all the time

Share Bear fucked around with this message at 17:18 on Oct 18, 2021 |
# ? Oct 18, 2021 17:16 |
|
PIZZA.BAT posted:python is fine for certain use cases but lol at calling it "better" than java

to be clear, this isn't necessarily bad for some use cases like throwing a quick script together, but anything beyond that i just don't get it. I mean, people also write web servers in bash I guess, that's a bad idea too
|
# ? Oct 18, 2021 17:22 |
|
fwiw python only holds the gil when it is executing bytecode and releases it during any type of i/o. turns out most of running a webserver is waiting on i/o. you can also release it yourself if you want, and plenty of projects do this
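easy to see for yourself. a sketch with sleep standing in for i/o (the interpreter drops the gil while a thread sleeps, same as during socket reads or file i/o) — four "i/o waits" overlap almost perfectly:

```python
import threading
import time

def fake_io():
    # time.sleep releases the GIL, just like socket reads / file i/o
    time.sleep(0.2)

t0 = time.monotonic()
threads = [threading.Thread(target=fake_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - t0
print(f"{elapsed:.2f}s")  # ~0.2s, not 0.8s: the waits overlapped
```

swap fake_io for a CPU-bound loop and you get the opposite result, which is the whole gil complaint in one experiment.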
|
# ? Oct 18, 2021 17:30 |
|
There's apparently a new proposal to remove the GIL from CPython going around. It's slightly pregnant, uh, I mean, slightly ABI-breaking, but this is Python land where nobody even cares about API stability, never mind ABI stability. I'm paraphrasing from the LWN article but supposedly the main problem is the increased overhead of atomic reference count increment/decrement ops in the absence of a GIL, so there are some optimizations to do with splitting the reference counts and not reference counting stuff like the True and False singletons. Hopefully Guido doesn't get up in this guy's face in the middle of a presentation about it this time.
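the reference counts in question are easy to poke at from the interpreter, for anyone who hasn't. a small sketch (the exact numbers are CPython-version-dependent, so treat them as typical rather than guaranteed):

```python
import sys

x = object()
# getrefcount reports one extra reference: its own argument
print(sys.getrefcount(x))   # typically 2: the binding `x` plus the call's argument

y = x                        # another reference bumps the count
print(sys.getrefcount(x))   # typically 3

# every binding like `y = x` is an INCREF today; without a GIL each one
# has to become an atomic op (or be skipped entirely for immortal
# singletons like True/False/None), which is where the overhead goes
```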
|
# ? Oct 18, 2021 17:39 |
|
12 rats tied together posted:fwiw python only holds the gil when it is executing bytecode and releases it during any type of i/o. turns out most of running a webserver is waiting on i/o
|
# ? Oct 18, 2021 18:53 |
|
the whole impetus for async/await (or non-synchronous call stacks in general) on the backend is that there are very real limits on the number of threads an OS can realistically handle. the one-thread-per-request model is just super inefficient for high-throughput stuff and is a very 2000s mindset
|
# ? Oct 18, 2021 18:55 |
|
because if you know python your odds of conning yourself into a $$$$$ machine learning gig increase!
|
# ? Oct 18, 2021 18:56 |
|
Sapozhnik posted:There's apparently a new proposal to remove the GIL from CPython going around. It's slightly pregnant, uh, I mean, slightly ABI-breaking, but this is Python land where nobody even cares about API stability, never mind ABI stability.

it's going to be amazing when packages break because subtle logic errors that were guarded by the GIL become exposed when you run in --gil-free mode or whatever
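worth noting those races already half-exist today: the gil only makes individual bytecodes atomic, not read-modify-write sequences like +=, which compiles to a load, an add, and a store that can interleave across threads. a sketch of the classic one, with the lock that was always supposed to be there:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        # `counter += 1` alone is load/add/store: three bytecodes the
        # GIL can interleave between threads, never mind a gil-free build
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 — correct only because of the lock
```

drop the `with lock:` and the total comes up short often enough to ruin your week, gil or no gil.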
|
# ? Oct 18, 2021 19:06 |
|
Sagacity posted:ok so...why go for python if it's being sold as "ok when mostly doing I/o, lovely locking if you have any business logic"

what have you made where you've run up against python performance problems?

i made a distributed syndication process that fires data via http to a ton of my place of work's partners and other internal services that couldn't be hooked up to kafka. i got stuck on single-threaded business logic so i used coroutines and subprocesses to do it more effectively; the bigger bottleneck for it is network bandwidth now

or is this a "i wouldn't use python to begin with cause it's slow" comment

works4me
|
# ? Oct 18, 2021 19:11 |
|
Share Bear posted:or is this a "i wouldnt use python to begin with cause its slow" comment

OTOH works4me is a very strong endorsement and it's fair enough if it does!
|
# ? Oct 18, 2021 19:22 |
|
familiarity and momentum are important, and whatever that may be for you is fine; generally developer time is more expensive than computer time after all. i can crank out terrible proofs of concept/mvps in this lang pretty well, and they also generally work for the loads they're given without tweaking or forking a ton of child procs or whatever

it only matters when it matters, and that's not as often as you'd think. or maybe i only work on small potatoes?
|
# ? Oct 18, 2021 19:31 |
|
the real answer to this question is simple - if you ask someone at any of the big tech places these days that started with, or are currently using, a dynamic language for their platform, they're going to tell you that facebook/instagram/github picked php/python/ruby because they knew those tools. nobody picks a language in a vacuum

to give you less of a fake non-answer: the "standard" web service deployment these days minimizes time spent executing in-process business logic. as a code toucher you had to learn that order-of-magnitude chart for access times, right? cache hit vs l1 cache vs memory read, and so on? a networked application by definition spends time waiting on the network, and time spent waiting on the network dominates most forms of efficiency gains you would realize by either choosing a different language or by tuning your python setup.

there are also tons of techniques for compensating for the gil. most python web servers fork into a bunch of workers that can each handle some number of requests based on their type. since handling an http request is a concurrency (not parallelism) problem, the best of these tend to be based on greenlets, which solves for arbitrarily high requests-per-worker; you can tune your web servers to run arbitrarily many workers-per-server, and then your load balancer to run arbitrarily many servers-per-endpoint. since it's so trivial to wait on network i/o with greenlets you also often see people shoving values into network caches, message buses or event stores, or minimizing CPU time locally by letting e.g. the postgresql query planner do as much of the hard work as possible.

for needs that are primarily focused on local access to cpu/disk/memory, you usually see modules made available as python bindings for an os-native language: scipy is C/Fortran, numpy is c/c++, cuda work goes through cython (which is kind of a hybrid python/c language), and so on.

as a systems janitor who has janitored many a framework and language, my experience is that cost per memory tends to be the controlling factor in scaling python past whatever chokepoint the company is going to encounter next; it's never been like, the time it takes to copy stuff out of the http request into a class instance.
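the fork-into-workers shape is basically this, minus all the production parts. a toy prefork sketch using stdlib multiprocessing (handle_request and the paths are made up; gunicorn et al do the real version, plus greenlets inside each worker):

```python
from multiprocessing import Pool
import os

def handle_request(path: str) -> str:
    # stand-in for a wsgi app; each worker process has its own
    # interpreter and its own GIL, so handlers run in parallel
    return f"worker {os.getpid()} served {path}"

def serve(paths, workers: int = 4):
    # analogous to `gunicorn --workers 4`: fan requests out
    # across a fixed pool of forked worker processes
    with Pool(processes=workers) as pool:
        return pool.map(handle_request, paths)

if __name__ == "__main__":
    for line in serve([f"/page/{i}" for i in range(8)]):
        print(line)
```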
|
# ? Oct 18, 2021 19:38 |
|
We run a single node process per instance and it works fine from a throughput perspective normally. Like I'd much rather be running dotnet core (we tested and the perf was better for a sample use case) but it isn't a major issue for us. Except if we gently caress up two things:

* Autoscaling. We run on two-vcpu instances but someone set the autoscaling threshold at 60%, which a single-threaded node process on two vcpus can never reach, so it never fired
* If you gently caress up and do deploy slightly slow code, the event loop rapidly fills and you get less margin than if you could use all the vcpus. I feel like this interacts badly with load balancing in some way but am not smart enough to figure it out.

Maybe we'd be better off running on 1-vcpu instances but we were having performance issues with them (I think due to excessive context switching with secondary services, idk).
|
# ? Oct 18, 2021 20:08 |
|
|
# ? Oct 19, 2021 00:20 |
|
xkcd is so bad, even when it's an edit, that i think you should not post them
|
# ? Oct 19, 2021 00:56 |
|
goatkcd or nothing imo
|
# ? Oct 19, 2021 00:58 |
|
Jabor posted:goatkcd or nothing imo
|
# ? Oct 19, 2021 01:16 |
|
there used to be a forum that had a "post goatkcd and you're banned" thread stickied for years. it was great
|
# ? Oct 19, 2021 01:16 |
|
oh it was the official xkcd forum lol
|
# ? Oct 19, 2021 01:29 |
|
CRIP EATIN BREAD posted:oh it was the official xkcd forum lol

who tf posts on the xkcd official forums
|
# ? Oct 19, 2021 03:37 |
|
losers, nerds

me 15 years ago
|
# ? Oct 19, 2021 03:41 |