Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer
it came about in pre-core asp.net because of iis request thread pooling reasons. i've never seen it be a problem in dotnet core and i don't know why you wouldn't use it?

in the python space it's definitely a mess though

Chalks
Sep 30, 2009

Sapozhnik posted:

yeah backend async/await is bad in most situations, don't use it

in user interfaces it's nice tho

unfortunately most c# packages use async/await everywhere and you can get deadlocks if you call it from non-async code. it sucks, but it's much easier to make everything async if you can, and thus the async disease propagates through everything you write too
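
the same sync-over-async deadlock exists over in python/asyncio land for what it's worth. a toy sketch (all names made up here, nothing from an actual package) of the shape: sync code blocking on async work that needs the very thread it's blocking:

code:
import asyncio

async def fetch_value():
    await asyncio.sleep(0.1)
    return 42

def sync_wrapper(loop):
    # called from code that is already running on the loop's thread:
    # schedule the coroutine, then block the thread waiting for the result
    fut = asyncio.run_coroutine_threadsafe(fetch_value(), loop)
    return fut.result()  # deadlock: the loop can't run fetch_value()
                         # while .result() is blocking its only thread

async def handler():
    loop = asyncio.get_running_loop()
    return sync_wrapper(loop)   # some "helpful" sync facade hiding the await

# asyncio.run(handler())  # hangs forever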

animist
Aug 28, 2018
monads, the syphilis of programming

zokie
Feb 13, 2006

Out of many, Sweden

Sapozhnik posted:

yeah backend async/await is bad in most situations, don't use it

in user interfaces it's nice tho

how can you be so wrong? the threadpool runs multiple threads, and all your db calls, io, or network stuff doesn’t need a thread and should be non-blocking.

the webserver is built on the assumption that it can handle more requests than it has threads in the threadpool but if you keep blocking then it’s going to end up getting thread starved.

async/await is good and you should use it wherever it’s possible
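
concretely, in python terms (a toy sketch, made-up handlers, not anyone's real app): blocking inside a handler holds the worker hostage for the whole wait, while awaiting hands the wait back to the runtime so other requests can proceed:

code:
import asyncio
import time

async def blocking_handler():
    time.sleep(1)           # blocks the event loop thread, nothing else runs
    return "done"

async def awaiting_handler():
    await asyncio.sleep(1)  # yields to the loop, other requests keep going
    return "done"

async def main():
    t0 = time.perf_counter()
    await asyncio.gather(*(awaiting_handler() for _ in range(10)))
    print(f"awaited: {time.perf_counter() - t0:.1f}s")   # ~1s for all ten

    t0 = time.perf_counter()
    await asyncio.gather(*(blocking_handler() for _ in range(10)))
    print(f"blocked: {time.perf_counter() - t0:.1f}s")   # ~10s, fully serialized

asyncio.run(main())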

Sapozhnik
Jan 2, 2005

Nap Ghost

NihilCredo posted:

you're the second guy that says that async should stay in ui land and backend code should be synchronous and it baffles me

like, don't you ever send emails from your backend (with or without an email service)? you don't upload files? you don't run complex queries that take a second or more from your dedicated database server? do you just buy the biggest 512-core server your employer's money can buy and let half of those threads spin while waiting for the external service to complete the request?

But they're not spinning, they're blocked. You can have far more OS threads than CPU cores if you want. If one request equals one OS thread then you get comprehensible stack traces, and you can resource limit those threads as well so that one runaway request doesn't soak up all of your CPU. Put simply there's a hard separation between requests when you run them on OS threads that is not quite as hard when you shred all of their processing into tiny pieces and mix them into one asynchronous IO work pool.

Email submission to your outbound gateway certainly shouldn't be taking a long time. Heavy queries on the database backend, well, the client is blocked anyway, and you have to store your request's execution state somewhere in the meantime, whether that's in the request thread's stack or in continuation ravioli on the heap. If you have a tunable thread pool for your application server instances then it also acts as a natural limit for the number of concurrent requests your application has in flight; if your backend is backlogging then your thread pool will naturally fill up and back-pressure the clients as well. You would have to explicitly implement a concurrent request rate limiting scheme in an async/await application.

I'm only really seeing upsides for synchronous thread pools versus a continuation-passing free-for-all.
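
To make the comparison concrete, here's the shape of both in python (invented names, illustrative sizes): the bounded pool limits in-flight requests by construction, the async version only gets a limit if you bolt one on yourself:

code:
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def do_blocking_work(req):
    time.sleep(0.1)            # stand-in for a blocking db/network call
    return req

async def do_async_work(req):
    await asyncio.sleep(0.1)   # stand-in for an awaited db/network call
    return req

# synchronous flavour: the pool size *is* the in-flight limit.
# requests past 32 just queue, which is the backpressure.
pool = ThreadPoolExecutor(max_workers=32)

def handle_sync(req):
    return pool.submit(do_blocking_work, req)

# async flavour: nothing bounds concurrency unless you add it explicitly
limit = asyncio.Semaphore(32)

async def handle_async(req):
    async with limit:          # the hand-rolled rate limiting scheme
        return await do_async_work(req)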

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
i reckon that guy probably fires off one db request, waits for it to complete, then fires off a second db request, then waits for it to complete, tying up a thread the whole time

and when clients complain that the site is slow he just shrugs and suggests that there's nothing that can possibly be done about it, that's just how computers work
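
i.e. the difference between these two (toy asyncio sketch, nobody's actual code):

code:
import asyncio

async def db_request(name):
    await asyncio.sleep(0.5)   # pretend round trip to the database
    return name

async def one_at_a_time():
    a = await db_request("first")    # ~0.5s
    b = await db_request("second")   # another ~0.5s, nothing overlapped
    return a, b

async def both_in_flight():
    # both requests in flight at once, ~0.5s total
    return await asyncio.gather(db_request("first"), db_request("second"))

asyncio.run(one_at_a_time())
asyncio.run(both_in_flight())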

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
this thread really is for terrible programmers jesus christ

TwoDice
Feb 11, 2005
Not one, two.
Grimey Drawer
call me when your OS is happy with 100k runnable threads

zokie
Feb 13, 2006

Out of many, Sweden
no matter if you are blocking in your request handlers or not, you can end up starving the application of threads, but if you are blocking when you don’t need to then you are wasting a lot of time that could be spent handling more requests. quite recently I was involved in trying to fix performance problems in an old asp.net app where they were blocking a lot, and according to perfview over 50% of the time was spent blocked, followed by actual cpu and then network.

one “solution” was for us, the ones calling this system, to rate limit ourselves, because they couldn’t handle anywhere near as many simultaneous users…

Zlodo
Nov 25, 2006
isn't "async/await" just a needlessly complicated way to say "coroutine"?

coroutines own and I wish we had access to c++20 coroutines in our codebase

like the "please, entity, load all your resources and poo poo" stuff, which may involve spawning other entities and waiting for them to finish their own initializations and it's done as a method called every frame that returns a bool to indicate if it's finished and the code inside is usually either some horrible bespoke state machine or doesn't bother and do everything in a single frame and dumps a big turd in the profiler

there are some monstrosities like that that I am going to have to split up and I wish I could just insert some co_yield (lol) to make it stop and come back later without rewriting half of this poo poo
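
the shape is the same thing a python generator gives you: suspend mid-function, resume next frame, no hand-rolled state machine. a rough sketch (made-up entity names, nothing from any real codebase):

code:
def load_child(name):
    print(f"{name}: child init")
    yield                          # "not done yet, call me again next frame"
    print(f"{name}: child done")

def load_entity_resources(name):
    print(f"{name}: requesting meshes")
    yield
    print(f"{name}: requesting textures")
    yield
    # spawn another entity and wait for it to finish its own init
    yield from load_child(f"{name}.turret")
    print(f"{name}: done")

def tick(job):
    # the "method called every frame that returns a bool" wrapper
    try:
        next(job)
        return False   # still initializing
    except StopIteration:
        return True    # finished

job = load_entity_resources("boss")
ticks = 0
done = False
while not done:
    done = tick(job)
    ticks += 1
print(f"finished after {ticks} frames")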

hobbesmaster
Jan 28, 2008

TwoDice posted:

call me when your OS is happy with 100k runnable threads

echo 100000 > /proc/sys/kernel/threads-max

hobbesmaster
Jan 28, 2008

probably need to increase vm.max_map_count too for it to be "happy"

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.

Zlodo posted:

isn't "async/await" just a needlessly complicated way to say "coroutine"?

sometimes? that's part of the fun, figuring out what differences are hidden by the similar syntax and what someone actually means by async/await

like I assume javascript promises are isomorphic to coroutines but they don't feel that similar to me when using them

TwoDice
Feb 11, 2005
Not one, two.
Grimey Drawer

hobbesmaster posted:

probably need to increase vm max map count too for it to be "happy"

it's probably gonna be sad either way

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

NihilCredo posted:

you're the second guy that says that async should stay in ui land and backend code should be synchronous and it baffles me

like, don't you ever send emails from your backend (with or without an email service)? you don't upload files? you don't run complex queries that take a second or more from your dedicated database server? do you just buy the biggest 512-core server your employer's money can buy and let half of those threads spin while waiting for the external service to complete the request?

no, i said non-ui code in swift should just be synchronous. that is a very different thing from saying that backend code should be synchronous because no one actually writes swift on the server. your ios app is not going to have so many pending network requests that having a blocked thread for each one is actually a problem, and if you somehow do then it's a sign you should make a less lovely API for talking to your server. there's all this poo poo that's been developed because it's what you have to do to handle 100k incoming requests that just doesn't make any sense when you have zero incoming requests and average well under one outgoing request per second.

Plorkyeran fucked around with this message at 16:41 on Oct 18, 2021

PIZZA.BAT
Nov 12, 2016


:cheers:


DoomTrainPhD posted:

As you should. It’s far better than Java.

lol

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

PIZZA.BAT
Nov 12, 2016


:cheers:


python is fine for certain use cases but lol at calling it "better" than java

12 rats tied together
Sep 7, 2006

theyre about the same bad, imo. python gets better as the ratio for developer price:computer price increases

Share Bear
Apr 27, 2004

like anything there are pros and cons and being aware of them will let you use either effectively

feel like i say some variation of this once a month given i work with both java and python all the time

Share Bear fucked around with this message at 17:18 on Oct 18, 2021

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

PIZZA.BAT posted:

python is fine for certain use cases but lol at calling it "better" than java
as discussed in this thread, python is certainly better at having global interpreter locks and poo poo performance

to be clear, this isn't necessarily bad for some use cases like throwing a quick script together

but anything beyond that, i just don't get it. I mean people also write web servers in bash I guess, that's a bad idea too

12 rats tied together
Sep 7, 2006

fwiw python only holds the gil when it is executing bytecode and releases it during any type of i/o. turns out most of running a webserver is waiting on i/o

you can also release it yourself if you want and plenty of projects do this
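
easy to see from pure python (toy numbers): the sleeping/io threads overlap because the gil is dropped during the wait, the bytecode-crunching threads don't:

code:
import threading
import time

def io_bound():
    time.sleep(1)                            # blocking waits release the gil

def cpu_bound():
    sum(i * i for i in range(10_000_000))    # pure bytecode, holds the gil

def run_in_threads(fn, n=4):
    t0 = time.perf_counter()
    threads = [threading.Thread(target=fn) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - t0

print(f"io  x4: {run_in_threads(io_bound):.1f}s")    # ~1s, the waits overlap
print(f"cpu x4: {run_in_threads(cpu_bound):.1f}s")   # ~4x a single run, serialized

(the "release it yourself" part is C extension territory, Py_BEGIN_ALLOW_THREADS / cython's nogil blocks, which is how numpy and friends do it)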

Sapozhnik
Jan 2, 2005

Nap Ghost
There's apparently a new proposal to remove the GIL from CPython going around. It's slightly pregnant, uh, I mean, slightly ABI-breaking, but this is Python land where nobody even cares about API stability, never mind ABI stability.

I'm paraphrasing from the LWN article but supposedly the main problem is the increased overhead of atomic reference count increment/decrement ops in the absence of a GIL, so there are some optimizations to do with splitting the reference counts and not reference counting stuff like the True and False singletons.

Hopefully Guido doesn't get up in this guy's face in the middle of a presentation about it this time.

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

12 rats tied together posted:

fwiw python only holds the gil when it is executing bytecode and releases it during any type of i/o. turns out most of running a webserver is waiting on i/o
ok so...why go for python if it's being sold as "ok when mostly doing I/o, lovely locking if you have any business logic"

GenJoe
Sep 15, 2010


Rehabilitated?


That's just a bullshit word.
the whole impetus for async/await (or non-synchronous call stacks in general) on the backend is that there are very real limits on the # of threads an OS can realistically handle. the one-thread-per-request model is just super inefficient for high-throughput stuff and is a very 2000s mindset

hobbesmaster
Jan 28, 2008

because if you know python your odds of conning yourself into a $$$$$ machine learning gig increase!

Hed
Mar 31, 2004

Fun Shoe

Sapozhnik posted:

There's apparently a new proposal to remove the GIL from CPython going around. It's slightly pregnant, uh, I mean, slightly ABI-breaking, but this is Python land where nobody even cares about API stability, never mind ABI stability.

I'm paraphrasing from the LWN article but supposedly the main problem is the increased overhead of atomic reference count increment/decrement ops in the absence of a GIL, so there are some optimizations to do with splitting the reference counts and not reference counting stuff like the True and False singletons.

Hopefully Guido doesn't get up in this guy's face in the middle of a presentation about it this time.

it's going to be amazing when packages break because subtle logic errors that were guarded by the GIL become exposed if you run in --gil-free mode or whatever
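
the classic shape of those bugs, in plain python (made-up example): a read-modify-write that the gil mostly papers over today and that has no excuse without a lock:

code:
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1    # load, add, store: several bytecodes, not atomic

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)          # the gil makes this *mostly* come out as 400000 today;
                        # without it, lost updates are the expected outcome

lock = threading.Lock()

def bump_safely(n):
    global counter
    for _ in range(n):
        with lock:      # the fix code like this actually needed all along
            counter += 1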

Share Bear
Apr 27, 2004

Sagacity posted:

ok so...why go for python if it's being sold as "ok when mostly doing I/o, lovely locking if you have any business logic"

what have you made where youve run up against python performance problems?

i made a distributed syndication process that fires data via http to a ton of my place of works partners and other internal services that couldnt be hooked up to kafka

i got stuck on single threaded business logic so i used coroutines and subprocesses to do so more effectively, the bigger bottleneck for it is network bandwidth now
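
roughly that shape, sketched (all names invented, not the actual system): subprocesses for the cpu-bound business logic so the gil stops mattering, coroutines for the http fan-out:

code:
import asyncio
from concurrent.futures import ProcessPoolExecutor

def business_logic(record):
    # cpu-bound transform; runs in a worker process, so the gil isn't in play
    return sum(hash((record, i)) % 97 for i in range(50_000))

async def send_to_partner(payload):
    await asyncio.sleep(0.2)   # stand-in for the http call out to a partner
    return payload

async def syndicate(records):
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as procs:
        # fan the cpu work out to subprocesses...
        transformed = await asyncio.gather(
            *(loop.run_in_executor(procs, business_logic, r) for r in records)
        )
    # ...then fan the sends out as coroutines waiting on the network
    return await asyncio.gather(*(send_to_partner(t) for t in transformed))

if __name__ == "__main__":
    asyncio.run(syndicate(["a", "b", "c", "d"]))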

or is this a “i wouldnt use python to begin with cause its slow” comment

works4me

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Share Bear posted:

or is this a “i wouldnt use python to begin with cause its slow” comment
mostly this. why choose a language that has all sorts of caveats when there are perfectly decent other languages available where you wouldn't run into these issues in the first place? genuinely curious

OTOH works4me is a very strong endorsement and it's fair enough if it does!

Share Bear
Apr 27, 2004

familiarity and momentum are important, and whatever that may be for you is fine, generally

developer time is more expensive than computer time after all. i can crank out terrible proofs of concept/mvps in this lang pretty well, and they also generally work for the loads theyre given without tweaking or forking a ton of child procs or whatever


it only matters when it matters, and that's not as often as you'd think. or maybe i only work on small potatoes?

12 rats tied together
Sep 7, 2006

the real answer to this question is simple - if you ask someone at any of the big tech places these days that started with, or are currently using, a dynamic language for their platform, they're going to tell you that facebook/instagram/github picked php/python/ruby because they knew those tools. nobody picks a language in a vacuum

to give you less of a fake non-answer, the "standard" web service deployment these days minimizes time spent executing in-process business logic. as a code toucher you had to learn that order of magnitude chart for seek times right? cache hit vs l1 cache vs memory read, and so on? a networked application by definition spends time waiting on the network. time spent waiting on the network dominates most forms of efficiency gains you would realize by either choosing a different language or by tuning your python setup.

there are also tons of techniques around compensating for the gil. most python web servers fork into a bunch of workers that can handle some number of requests based on their type. since handling an http request is a concurrency (not parallelism) problem, the best one of these tends to be based on greenlets which solves for arbitrarily high requests-per-worker, you can tune your web servers to run arbitrarily high workers-per-server, and then your load balancer to run arbitrarily high servers-per-endpoint.
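
the usual way that gets wired up is something like gunicorn with gevent workers; a minimal config sketch (numbers are illustrative, tune per box):

code:
# gunicorn.conf.py
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1   # the forked worker processes
worker_class = "gevent"          # greenlet-based worker, one greenlet per request
worker_connections = 1000        # the requests-per-worker knob

then gunicorn myapp:app -c gunicorn.conf.py (myapp:app being whatever your wsgi entrypoint is) and the load balancer layer handles the servers-per-endpoint part.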

since it's so trivial to wait on network i/o with greenlets you also often see people shoving values into network caches, message buses or event stores, or minimizing CPU time locally by letting e.g. the postgresql query planner do as much of the hard work as is possible.

for needs that are primarily focused on local access to cpu/disk/memory, you usually see modules made available as python bindings for an os-native language. scipy is C/Fortran, numpy is c/c++, cuda is cython which is kind of a hybrid python/c language, and so on.

as a systems janitor who has janitored many a framework and language, my experience is that memory cost tends to be the controlling factor in scaling python past whatever chokepoint the company is going to encounter next. it's never been like, the time it takes to copy stuff out of the http request into a class instance.

distortion park
Apr 25, 2011


We run a single node process per instance and it works fine from a throughput perspective normally. Like I'd much rather be running dotnet core (we tested and the perf was better for a sample use case) but it isn't a major issue for us.

Except if we gently caress up two things:
* Autoscaling. We run on two vcpu instances, but a single node process can only ever max out one of them (so roughly 50% total cpu), and someone set the autoscaling threshold at 60%, so it never fired
* If you gently caress up and do deploy slightly slow code the event loop rapidly fills and you get less margin than if you could use all the vcpus. I feel like this interacts badly with load balancing in some way but am not smart enough to figure it out.

Maybe we'd be better off running on 1 vcpu instances but we were having performance issues with them (I think due to excessive context switching with secondary services idk).

N.Z.'s Champion
Jun 8, 2003

Yam Slacker

matti
Mar 31, 2019

xkcd is so bad, even when it's an edit, that i think you should not post them

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
goatkcd or nothing imo

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat

Jabor posted:

goatkcd or nothing imo

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
there used to be a forum that had a sticky thread "post goatkcd and you're banned" stickied for years

it was great

CRIP EATIN BREAD
Jun 24, 2002

Hey stop worrying bout my acting bitch, and worry about your WACK ass music. In the mean time... Eat a hot bowl of Dicks! Ice T



Soiled Meat
oh it was the official xkcd forum lol

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

CRIP EATIN BREAD posted:

oh it was the official xkcd forum lol

who tf posts on the xkcd official forums

hobbesmaster
Jan 28, 2008

losers, nerds me 15 years ago

  • Reply