|
cooperative is really good, sometimes
|
# ? Jul 28, 2017 20:30 |
|
|
# ? May 22, 2024 14:57 |
|
Bloody posted:cooperative is really good, sometimes

go on, i trust you
|
# ? Jul 28, 2017 21:30 |
|
ynohtna posted:go on, i trust you

great, now we're never getting the thread back
|
# ? Jul 28, 2017 21:49 |
|
async/await fuckin owns, it let me replace a very complex http request handling system with something much smaller and cleaner though i don't know what it's doing 100%, it's good magic
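A minimal sketch (in Python's asyncio, with invented handler names) of why this kind of rewrite comes out smaller: each handler reads as straight-line code, while the event loop interleaves many of them on one thread.

```python
import asyncio

async def handle_request(request_id: int) -> str:
    # Stand-in for an async database or upstream HTTP call.
    await asyncio.sleep(0.01)
    return f"response {request_id}"

async def serve(n: int) -> list:
    # All handlers run concurrently; total time is ~0.01s, not n * 0.01s.
    return await asyncio.gather(*(handle_request(i) for i in range(n)))

responses = asyncio.run(serve(3))
```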
|
# ? Jul 28, 2017 22:52 |
|
JawnV6 posted:great, now we're never getting the thread back
|
# ? Jul 29, 2017 03:19 |
|
whew im finished now hope i didnt fuck anything up
|
# ? Jul 29, 2017 04:18 |
|
but i thought you were waiting for me to finish?
|
# ? Jul 29, 2017 05:11 |
|
Sapozhnik posted:but i thought you were waiting for me to finish?

sounds like youre done, so

For twelve years, you have been asking: Who is Lutha Mahtin? This is Lutha Mahtin speaking. I am the thread who loves his life. I am the thread who does not sacrifice his love or his values. I am the thread who has deprived you of CPU time and thus has destroyed your world, and if you wish to know why you will never terminate--
|
# ? Jul 31, 2017 04:29 |
|
dart??
|
# ? Jul 31, 2017 12:49 |
|
https://twitter.com/thorstenball/status/891696891414663168
|
# ? Aug 4, 2017 06:05 |
|
pretty interesting write up on the state of concurrency models in modern programming languages: https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#file-taskconcurrencymanifesto-md
|
# ? Aug 18, 2017 23:29 |
|
MALE SHOEGAZE posted:pretty interesting write up on the state of concurrency models in modern programming languages:

i hope sometime we can have real threads again instead of everyone doing cooperative multitasking in all code and never block or sleep because someone needs to serve a billion idle clients on a raspberry pi. like i don't care if they're os threads or green threads or whatever, but maybe if you're serving less than a request per second you should just write a bunch of blocking statements and have the computer execute them in order.

go got that right, too bad the language is terrible for writing concurrent code
|
# ? Aug 20, 2017 19:10 |
|
If you're serving less than one request per second it's probably easier to hire someone to do it by hand
|
# ? Aug 20, 2017 19:15 |
|
suffix posted:i hope sometime we can have real threads again instead of everyone doing cooperative multitasking in all code and never block or sleep because someone needs to serve a billion idle clients on a raspberry pi

nah, async/await is the right thing here. it's basically just writing blocking code but asking you to do some minimal acknowledgement of that fact
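The "blocking code plus minimal acknowledgement" point, sketched in Python (asyncio standing in for whichever runtime you use): the two functions below have the same shape, and the only visible difference is the keywords marking suspension points.

```python
import asyncio
import time

def fetch_blocking() -> str:
    time.sleep(0.01)           # blocks the whole thread
    return "data"

async def fetch_async() -> str:
    await asyncio.sleep(0.01)  # suspends this task; the thread runs other tasks
    return "data"

blocking_result = fetch_blocking()
async_result = asyncio.run(fetch_async())
```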
|
# ? Aug 20, 2017 19:52 |
|
you can always put your await inside a thread as well if you like! just make sure it's got some sort of pool to return to while suspended
|
# ? Aug 20, 2017 20:09 |
|
I've had some nice success mixing threads and async. that is my story
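One hedged sketch of mixing the two in Python: each thread can host its own private event loop, so awaits inside that thread suspend onto that thread's loop (the "pool to return to" from the post above).

```python
import asyncio
import threading

results = []

def worker(name: str) -> None:
    async def task() -> str:
        await asyncio.sleep(0.01)
        return name
    # asyncio.run creates a fresh event loop private to this thread.
    results.append(asyncio.run(task()))

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```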
|
# ? Aug 20, 2017 20:15 |
|
I think I've said before I think async/await is a real breakthrough in PL design. As important as introducing lambdas to mainstream languages.

Probably the biggest thing we need at this point, though, is better OS support. Last I checked, Linux async IO stuff is... disappointing. I don't think anything can actually be async without having threads in the background that get recruited to go do blocking syscalls. I'm not an expert in windows stuff, but I think this is one of the unusual areas where windows is legit a lot better. At least the right APIs exist.

Someday, we'll have applications that open up all the files they need to load and read them concurrently, instead of one at a time. Even "multithreaded" loading only uses a small number of threads, like how many cores you have, when an NVMe SSD can eat tens of thousands of concurrent reads for breakfast.
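The "threads in the background doing blocking syscalls" point shows up in asyncio itself: as a sketch (file contents invented here), "async" file reads are blocking read() calls shipped off to a pool thread via to_thread, not kernel-level async file IO.

```python
import asyncio
import os
import tempfile

async def read_file(path: str) -> bytes:
    # to_thread hands the blocking open/read to a background pool thread.
    return await asyncio.to_thread(lambda: open(path, "rb").read())

async def read_all(paths: list) -> list:
    # All reads are in flight concurrently, even though each one blocks a pool thread.
    return await asyncio.gather(*(read_file(p) for p in paths))

paths = []
for i in range(3):
    fd, p = tempfile.mkstemp()
    os.write(fd, f"file{i}".encode())
    os.close(fd)
    paths.append(p)

contents = asyncio.run(read_all(paths))

for p in paths:
    os.remove(p)
```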
|
# ? Aug 20, 2017 21:09 |
|
MALE SHOEGAZE posted:pretty interesting write up on the state of concurrency models in modern programming languages:

The "crazy and brilliant" section is weird. I'm not really sure asynchronous operations address the primary difficulty in programming DSPs and GPUs.
|
# ? Aug 20, 2017 21:22 |
|
crazypenguin posted:Linux is... disappointing.
|
# ? Aug 20, 2017 21:24 |
|
crazypenguin posted:I think I've said before I think async/await is a real breakthrough in PL design. As important as introducing lambdas to mainstream languages.

Yeah, the Windows I/O stack and APIs are fully async, Linux is all blocking thread garbage from the 1970s. Linux AIO is a bastard step-child that doesn't actually work on things like file or network I/O. And Linux non-blocking I/O also doesn't work on files, has a mostly unusable wait primitive (epoll), and that's ignoring how non-blocking is ass-backwards compared to async.
|
# ? Aug 20, 2017 21:36 |
|
crazypenguin posted:I think I've said before I think async/await is a real breakthrough in PL design. As important as introducing lambdas to mainstream languages.

You can do true asynchronous file IO (not network IO) under Linux. The native API kind of looks like POSIX AIO but it isn't; this might be because POSIX AIO harvests events using real time signals, which is a bit icky. I have no idea how event harvesting on the native API works. This comes with caveats though: you have to open a file with O_DIRECT and you have to perform page aligned accesses. Pretty much what you're doing is bypassing the kernel block cache and submitting work to the block layer directly, with all the low-levelness that entails.

The other alternative, if you don't care about latency, is to mmap() the things you want to read/write and then let page faults handle everything else. The advantage here is that you're acting directly on the block cache; if you use traditional read/write calls then you will fault pages in/out of the block cache and then do a kernel-to-userspace memcpy from said block cache. The blocks have to get loaded into the cache anyway, so you may as well use that backing store directly from your application instead of keeping redundant copies of your data in both user and kernel space. But that's not async IO related as such.
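A small Python sketch of the mmap() alternative described above: map the file and read through the page cache directly, letting page faults do the IO instead of read() copying kernel-to-userspace.

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello from the page cache")

# Length 0 maps the whole file; the first touch of each page faults it in.
with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
    first_word = m[:5]

os.close(fd)
os.remove(path)
```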
|
# ? Aug 20, 2017 21:42 |
|
Async IO is usually the wrong way to go for network applications. If you're doing something really high-end like a C10K server then the last thing you want is to have separate receive buffers for every single open and idle connection. epoll is in fact exactly what you want there; wake up when a thing happens, use a single buffer to recv() the thing into.

Zero-copy networking isn't really a thing that can be done in a generic way from user space. You'd need to somehow packetize your messages from user space with knowledge of each connection's link MTU, then the kernel needs to weave the resulting scatter/gather list in with the relevant packet headers for all of the Ethernet, IP, TCP etc layers on top of that. And then the NIC needs to DMA all that shit out from all of those memory pools, and then the kernel would need to release all of the buffers once that's done. Any speedup you get would drown in the bureaucracy.

There are specialized accelerator cards that can do this sort of thing (Solarflare, I think?), but they have to be exclusively owned by a single user space process; you can only really solve the problems above by cutting the kernel out of the loop altogether and mapping the IO address space directly into the controlling process.
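The epoll pattern described here (one readiness loop, one shared buffer for however many sockets are live) in miniature, using Python's selectors module (epoll-backed on Linux) and a socketpair so it's self-contained:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.sendall(b"ping")

received = b""
# Wake up when a thing happens...
for key, _events in sel.select(timeout=1):
    # ...and recv() into one buffer, reused for every ready socket.
    received = key.fileobj.recv(4096)

sel.close()
a.close()
b.close()
```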
|
# ? Aug 20, 2017 21:50 |
|
as a mere pleb this has me wondering. i never really considered much how async works under the hood and it never occurred to me that the OS would be much involved at all. soo...what exactly is happening when i do async?
|
# ? Aug 20, 2017 21:59 |
|
Thermopyle posted:as a mere pleb this has me wondering

depends on the specific thing you're doing. here's writing some data in .net on windows.
|
# ? Aug 20, 2017 22:26 |
|
The Windows I/O stack is packet-based: things like ReadFile get turned into a read packet that gets submitted to the filesystem driver, which turns around and submits another I/O packet to the volume manager, which submits I/O packets to the disk controller, etc. (sharing the same data buffer). Eventually the driver at the bottom of the stack submits a command to a hardware command queue, later gets a hardware interrupt announcing the I/O completion, and notifies whatever is immediately above it in the I/O stack, which percolates back up to the top.

There are multiple ways the kernel can notify userspace that the I/O operation has completed -- the simplest is for "synchronous" operations, where the userspace function submits an async I/O and then immediately waits for the file handle to be signalled by the kernel. Alternately, you can:

* issue "loose" async operations and wait on the event objects you supplied in the OVERLAPPED struct via e.g. WaitForMultipleObjects(), or poll for completion with GetOverlappedResult()
* associate the file handle with an I/O Completion Port, submit a bunch of async operations, and then dequeue result packets from the IOCP as the operations complete
* supply an Asynchronous Procedure Call function when you issue the async operation; when the async operation completes, the kernel will add an APC call to the thread's queue, and when the thread enters an alertable state (only in the WaitForObject functions), it'll dequeue and call the APCs (this is a less idiotic version of Unix signals)

async/await functions are written as straight-line code by the developer, but get turned into state machines by the compiler. They take a hidden argument that describes their current state in the state machine. When you await something, the function issues an async I/O operation that will notify the event loop on completion and then returns back to the GUI or server event loop. The async I/O operation carries some user-supplied data describing the async/await function and its current state in the state machine. When the event loop thread gets a completion notification for the async I/O, it calls back into the async/await function again, supplying the current state that was saved away in the I/O completion notification. The async/await function then starts where it left off previously.

pseudorandom name fucked around with this message at 00:01 on Aug 21, 2017
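The compiler transform described above can be imitated by hand: here's a toy Python state machine (names invented for illustration) for a function whose straight-line form would be "x = await read(); return x + 1". The event loop "drives" it by calling resume() with the I/O result when the completion notification arrives.

```python
class AddAfterIO:
    """Hand-written state machine for: x = await read(); return x + 1"""

    def __init__(self):
        self.state = 0  # the "hidden argument": where we are in the function

    def resume(self, io_result=None):
        if self.state == 0:
            # First entry: issue the async I/O and suspend back to the loop.
            self.state = 1
            return ("await_read", None)
        elif self.state == 1:
            # Re-entered by the event loop with the I/O's result.
            self.state = 2
            return ("done", io_result + 1)

machine = AddAfterIO()
step1 = machine.resume()    # suspends: I/O issued, control returns to the loop
step2 = machine.resume(41)  # the loop delivers the completion; function finishes
```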
# ? Aug 20, 2017 22:54 |
|
informative post ^^ (also good read guy who linked me to that blog)

whats the history of this? did MS come up with this idea on their own, and then everyone else said "hey Microsoft did some cool shit here!" and copied the idea from them?
|
# ? Aug 20, 2017 23:41 |
|
I'm not 100% on the history, but Microsoft probably deserves some (see edit) credit, especially for taking a leap and giving a seriously good implementation of it in a mainstream language like C#. But async IO has been around a long time, and async/await is pretty much "what if we did async, but with monads?"

e: should probably say more than just "some". They proved it was a good idea, really.

crazypenguin fucked around with this message at 23:57 on Aug 20, 2017
# ? Aug 20, 2017 23:53 |
|
The Windows IO stack (and much of the rest of the kernel design) comes from VMS and RSX-11 by way of Dave Cutler. async/await is basically a simplified version of continuations that mere mortals can use, or a more generalized form of generators. They're not exactly new or innovative, but Microsoft/Anders Hejlsberg deserve the credit for designing something usable and shoving it into the mainstream.
|
# ? Aug 21, 2017 00:08 |
|
Sapozhnik posted:Async IO is usually the wrong way to go for network applications. If you're doing something really high-end like a C10K server then the last thing you want is to have separate receive buffers for every single open and idle connection. epoll is in fact exactly what you want there; wake up when a thing happens, use a single buffer to recv() the thing into.

wait, so are you distinguishing async i/o in the way that e.g. windows does it from epoll? I'm not very familiar with what async really means on the OS level, so apologies if that's a dumb question.
|
# ? Aug 21, 2017 00:28 |
|
pseudorandom name posted:The Windows I/O stack is packet-based, things like ReadFile get turned into a read packet that gets submitted to the filesystem driver which turns around and submits another I/O packet to the volume manager which submits I/O packets to the disk controller, etc. (sharing the same data buffer). Eventually the driver at the bottom of the stack submits a command to a hardware command queue, later gets a hardware interrupt announcing the I/O completion, and notifies whatever is immediately above it in the I/O stack, which percolates back up to the top.

i have absolutely no idea how OVERLAPPED works
|
# ? Aug 21, 2017 00:31 |
|
O_DIRECT more like O_RACLE
|
# ? Aug 21, 2017 00:33 |
|
I mean there are two things being talked about here which have the same name but mean totally different things.

"async/await" is purely a language feature for transforming a program in a way that makes io code easier to write. Even then there are two approaches that are similar but not quite identical: one based around adding syntactic sugar for something that would otherwise be purely a promises code library (C#, JavaScript), and another that is a minor tweak to the coroutines language feature (Python).

"async io" is a system call interface that lets programs initiate io and then go do something else while it completes. Not to be confused with event polling, which is based around synchronous io, and is a system call interface that causes your program to wait until one or more io channels would not block if you were to perform synchronous io on them.
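The "syntactic sugar over a promises library" view, sketched with asyncio: awaiting a future is just suspending until some completion callback resolves it, which is exactly what a promise `.then()` chain would register by hand.

```python
import asyncio

async def main() -> str:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Simulate an I/O completion callback resolving the promise later.
    loop.call_later(0.01, fut.set_result, "resolved")
    # The await registers this coroutine as the future's continuation.
    return await fut

value = asyncio.run(main())
```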
|
# ? Aug 21, 2017 00:36 |
|
ok-- I guess I also see people use async i/o to mean "I have one thread servicing many sockets/files instead of 1:1" e.g. with epoll. I'm particularly thinking of netty as that's what I've been tinkering with a lot lately.
|
# ? Aug 21, 2017 00:40 |
|
async I/O is specifically telling the kernel "go do a thing and let me know when you're done", where "thing" packages up e.g. a file handle, an offset, a size, a memory buffer, and an operation, and the kernel goes off and reads data from disk or sends a TCP packet or something.

polled I/O is asking the kernel "if I attempt to do a thing right now, would I block?" where "thing" is read some data or write some data, and the kernel can only ever say "maybe". you can sometimes combine this with non-blocking I/O, where if the kernel was lying when it said maybe, the kernel will probably tell you "whoops I was lying" instead of blocking.

the big difference is that with async I/O, your single thread can start multiple operations simultaneously and the kernel can e.g. sort disk requests based on an elevator algorithm and return them out-of-order, or submit the requests simultaneously to different mirrored RAID devices, or round-robin them to different channel bonded NICs or whatever. without async I/O you need one userspace thread per outstanding request, with all the overhead that entails
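The "whoops I was lying" case in miniature, in Python: a non-blocking recv() on a socket with nothing queued raises instead of blocking.

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)  # non-blocking mode: never sleep in recv()

try:
    a.recv(4096)      # nothing has been sent, so this cannot succeed
    raised = False
except BlockingIOError:
    raised = True     # the kernel refuses to block and reports EWOULDBLOCK

a.close()
b.close()
```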
|
# ? Aug 21, 2017 00:50 |
|
pseudorandom name posted:The Windows IO stack (and much of the rest of the kernel design) comes from VMS and RSX-11 by way of Dave Cutler.

i can't wait til anders gets bored of typescript and decides to make a good new language (and not just microsoft java or microsoft javascript)
|
# ? Aug 21, 2017 02:02 |
|
i would be extremely open to microsoft rust
|
# ? Aug 21, 2017 04:18 |
|
Bloody posted:i have absolutely no idea how OVERLAPPED works

it's kind of dumb -- the conceptual design is good, but the nitty gritty details of how the API actually works are a little bit crap. the file handle, buffer address, buffer size, and APC function pointer are passed directly as parameters to e.g. ReadFileEx, but the file offset and notification event handle are packaged up in an OVERLAPPED struct with some internal data members for no apparent reason.

and then you have the CreateIoCompletionPort function, which can create an IOCP, associate a file handle with an existing IOCP, or simultaneously create an IOCP and associate a file handle with it, because it sure makes sense to cram all three things into one function called Create.
|
# ? Aug 21, 2017 05:41 |
|
Bloody posted:i would be extremely open to microsoft rust
|
# ? Aug 21, 2017 07:32 |
|
Gazpacho posted:every language microsoft has adopted after C++ has been a mistake so that would fit

c#? typescript?
|
# ? Aug 21, 2017 08:22 |
|
|
pseudorandom name posted:its kind of dumb -- the conceptual design is good, but the nitty gritty details of how the API actually works are a little bit crap.

The synchronous bits go in the function call and the optional asynchronous bits go into a struct. The address of the struct itself is also used to identify the io channel when harvesting events, which is actually really good because you can stuff it into an object and then use containerof to recover that object with a single arithmetic instruction. though at least epoll gives you a pointer-sized context that you can associate with every io op, so, same thing.

In addition to IOCPs, Windows also has an extremely shitty select() type thing (max 64 file handles), and USER32 also has an IPCish thing in the form of window messages, which are used for UI operations. You can combine the shitty select thing and waiting for a window message using one function (MsgWaitForObject or somesuch) but you can't combine messages with IOCPs. You'd think you can select() on the IOCP itself, but you can't because NT doesn't work that way; if you try to select() on an IOCP it just returns immediately.

Sapozhnik fucked around with this message at 14:31 on Aug 21, 2017
# ? Aug 21, 2017 14:28 |