|
it's probably just broken down by SKU
|
# ? Nov 24, 2017 22:25 |
|
|
poo poo, gamers got cash. I guess I should just be thankful they are subsidizing my FLOPS. It's too bad that so many people think GPU compute == Bitcoin hashes. Hashing is a really boring/trivial parallel problem, and it's actually really fun to do complex data parallel programming.
|
# ? Nov 24, 2017 23:19 |
|
we do that in games too, we just have different constraints http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
|
# ? Nov 24, 2017 23:31 |
|
LinYutang posted:
> are people really trying to do compute on phones though

only goofus is. gallant is developing and training his models on a real compute cluster, and just deploying pre-baked and optimized models on phones (CoreML is cool)
|
# ? Nov 25, 2017 00:13 |
|
i'm still learning cuda, and last week i hit an issue making the cuda driver api and the cuda runtime api play nice together. cudart will adopt the current thread's context as its own if you create one with the driver api before making any cudart calls, so i did that. and cudart started throwing incompatibility errors. turns out on older cards, the context versions are too low to support cudart, so you have to retain the card's primary context instead. i'm assuming every card has only one primary context and it can't be shared between cpu threads, since cuda's docs say you need to release it when done. is this why people have trouble running more than one program per gpgpu? do these problems exist in opencl?
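for reference, here's a minimal sketch of the primary-context workaround described above: instead of cuCtxCreate (whose context may report a version too old for cudart on older cards), retain the device's primary context — the same one cudart would create implicitly — and make it current before any runtime calls. error handling is abbreviated and this assumes the CUDA toolkit headers are available.

```c
/* Sketch: share the device's primary context between the driver API and
 * cudart. Needs an NVIDIA GPU and the CUDA toolkit; error checks omitted
 * for brevity. */
#include <cuda.h>
#include <cuda_runtime.h>

int main(void) {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    /* Retain the primary context instead of cuCtxCreate'ing a fresh one;
     * this is the context cudart itself would use. */
    CUcontext ctx;
    cuDevicePrimaryCtxRetain(&ctx, dev);
    cuCtxSetCurrent(ctx);

    /* cudart calls now run in the primary context rather than
     * fighting with a driver-created one. */
    void *buf;
    cudaMalloc(&buf, 1024);
    cudaFree(buf);

    /* Balance the retain when done, as the docs require. */
    cuDevicePrimaryCtxRelease(dev);
    return 0;
}
```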
|
# ? Nov 25, 2017 13:24 |
|
OpenCL backends may suck to various degrees, but they're generally better at multithreading/state management. This is because everything is explicit and very verbose in the OpenCL API. This sucks a bit (a lot) when you're just writing code yourself, but is quite nice when you're generating code, or writing a library, because there will be no hidden state. I don't know if older NVIDIA cards have weird rules that say you can only have one OpenCL context at a time, but I don't think so. I have certainly been running with dozens of active OpenCL contexts on a GTX780 GPU. You may need to create a distinct command queue for every thread, but that's no big deal - it mostly means that synchronisation is your own out-of-band problem.
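the "dozens of contexts" setup above looks roughly like this in host code — one explicit context, with each thread creating its own command queue. a minimal sketch assuming an OpenCL 1.x ICD and a visible GPU device; error checks abbreviated.

```c
/* Sketch: one explicit OpenCL context shared by several threads, each
 * with its own command queue. The context object is thread-safe; the
 * queues give each thread independent submission, so synchronisation
 * between threads is your own out-of-band problem. */
#include <CL/cl.h>
#include <pthread.h>

static cl_context ctx;
static cl_device_id dev;

static void *worker(void *arg) {
    cl_int err;
    /* Per-thread queue on the shared context. */
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 1024, NULL, &err);
    /* ... enqueue kernels/transfers on q ... */
    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    return NULL;
}

int main(void) {
    cl_platform_id plat;
    cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);

    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);

    clReleaseContext(ctx);
    return 0;
}
```

(clCreateCommandQueue is the 1.x entry point; OpenCL 2.0 renames it clCreateCommandQueueWithProperties, but the threading model is the same.)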
|
# ? Nov 25, 2017 13:39 |
|
what I need to know is when will vrml be back
|
# ? Nov 25, 2017 15:03 |
|
|
it's back already, but it's implemented entirely in JavaScript and requires several GB of RAM just to launch
|
# ? Nov 25, 2017 17:53 |