|
this talk of apis and stuff makes me wonder if someone would ever make a wddmwrapper in the spirit of ndiswrapper. reactos doesn't even support wddm at the moment so maybe that's a bit too far out.
|
# ? Oct 22, 2017 16:58 |
|
crazysim posted:reactos *ears perk up*
|
# ? Oct 23, 2017 17:00 |
|
echinopsis posted:i can remember space moose from like 2004 when I joined noooo, I think I started the space moose av back in 2008 or 2009 or so. in 2004 I was still av-less
|
# ? Oct 23, 2017 21:02 |
|
last decade wow
|
# ? Oct 24, 2017 01:01 |
|
Malcolm XML posted:yeah nvidia is not gonna give away the detailed hardware specs needed for this unless AMD really magically starts competing in GPUs i wasn't suggesting they would publish specs or beg open source authors to work on poo poo, just that nvidia would maintain an in-tree kernel driver instead of an out-of-tree one
|
# ? Oct 24, 2017 01:04 |
|
the fact that nvidia has helped out the nouveau project with doc dumps from time to time suggests to me that they are not so much hostile to open source as focused on the bottom line. the nvidia linux driver makes money for nvidia in its current state, as a bunch of closed source blobs with a lovely little open source shim. if the future path of least resistance is a real open source kernel driver, pushing the blobs into user space, i would bet on nvidia following that path
|
# ? Oct 24, 2017 01:05 |
|
whenever a machine intelligence takes over the world its prob gonna be running on nvidia hardware
|
# ? Oct 24, 2017 02:43 |
|
do i win hottest take?
|
# ? Oct 24, 2017 02:43 |
|
look i know they play it up a lot for marketing, but "machine learning" isn't really all that close to machine intelligence.
|
# ? Oct 24, 2017 02:50 |
|
apparently i wont effortpost again because you guys are more interested in linux
|
# ? Oct 24, 2017 03:43 |
|
Suspicious Dish posted:apparently i wont effortpost again because you guys are more interested in linux YOSPOS, bithc.
|
# ? Oct 24, 2017 03:43 |
|
josh04 posted:look i know they play it up a lot for marketing, but "machine learning" isn't really all that close to machine intelligence. they can only currently beat the top humans in the world at specific complex tasks, like playing go, or some RTS game. as far as generality, they are doing ok at learning to interpret raw pixel input from games the problem happens when some computer scientist decides to train a nn to be a computer scientist/neural network design expert. teach it concepts of logic, proofs, algorithms, and eventually how its own brain works. *then* we're hosed
|
# ? Oct 24, 2017 04:39 |
|
Suspicious Dish posted:apparently i wont effortpost again because you guys are more interested in linux i appreciate your effortposts and hope you continue, friend
|
# ? Oct 24, 2017 04:47 |
|
peepsalot posted:the problem happens when some computer scientist decides to train a nn to be a computer scientist/neural network design expert. google's literally already doing that and they saw increases in performance of *gasp* a whole 20%!! turns out there's a ton of saddle points and local minima that neural networks love to get stuck in and the whole "exponentially self-improving AI" thing is entirely fiction
|
# ? Oct 24, 2017 04:48 |
|
ate all the Oreos posted:google's literally already doing that and they saw increases in performance of *gasp* a whole 20%!!
|
# ? Oct 24, 2017 04:54 |
|
basically if our brains evolved from a few billion years of natural selection and dumb luck, how many clock cycles (at a few billion per second currently) will it be before AI can beat us at the game of life?
|
# ? Oct 24, 2017 05:12 |
|
computers are deterministic. there is no truly random evolution
|
# ? Oct 24, 2017 05:15 |
|
peepsalot posted:basically if our brains evolved from a few billion of years of natural selection and dumb luck, humans will probably get out-competed and driven to extinction by corporations before computers become a threat
|
# ? Oct 24, 2017 05:33 |
|
Cocoa Crispies posted:humans will probably get out-competed and driven to extinction by corporations before computers become a threat
|
# ? Oct 24, 2017 05:35 |
|
a google ai literally shoving targeted content down your throat until you suffocate
|
# ? Oct 24, 2017 05:38 |
|
peepsalot posted:whenever a machine intelligence takes over the world its prob gonna be running on nvidia hardware last I heard Nvidia hardware wasn’t all that great for running Lisp, and the only real contender for anything resembling “machine intelligence” is written in Lisp
|
# ? Oct 24, 2017 05:39 |
|
Suspicious Dish posted:apparently i wont effortpost again because you guys are more interested in linux boo Suspicious Dish make the effortpost
|
# ? Oct 24, 2017 05:40 |
|
peepsalot posted:as far as generality, they are doing ok at learning to interpret raw pixel input from games Eurisko/Cyc hasn’t hosed us yet despite understanding how its own brain works and being able to self-modify
|
# ? Oct 24, 2017 05:44 |
|
eschaton posted:Eurisko/Cyc hasn’t hosed us yet despite understanding how its own brain works and being able to self-modify
|
# ? Oct 24, 2017 05:50 |
|
NEED MORE MILK posted:computers are deterministic. there is no truly random evolution e: also, deterministic doesn't preclude chaotic. Suspicious Dish: sorry i poo poo up the thread with my doomsaying, your smart posts are good too thanks and I read them peepsalot fucked around with this message at 07:02 on Oct 24, 2017 |
# ? Oct 24, 2017 06:16 |
|
nobody seemed to care about quad occupancy so i won't post anymore about that... in the meantime have a really good article from 2013 about warp branch behavior and divergence https://tangentvector.wordpress.com/2013/04/12/a-digression-on-divergence/
|
# ? Oct 24, 2017 07:41 |
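the divergence cost that article describes can be sketched with a toy SIMT model in plain python (names like `warp_if_else` are made up for illustration, not any real GPU API): when the lanes of a warp disagree on a branch, the hardware executes both sides serially with inactive lanes masked off, so a divergent branch costs the sum of both paths, not the max.

```python
# toy SIMT model: a "warp" of lanes executes both sides of a branch
# serially, with inactive lanes masked off, so the cost of a divergent
# branch is the sum of both paths rather than just the taken one

def warp_if_else(cond_mask, then_cost, else_cost):
    """Return cycles a warp spends on a branch, given each lane's condition."""
    cycles = 0
    if any(cond_mask):          # some lane takes the then-path
        cycles += then_cost     # lanes where cond is False sit idle
    if not all(cond_mask):      # some lane takes the else-path
        cycles += else_cost     # lanes where cond is True sit idle
    return cycles

# all 32 lanes agree: only one path is executed
uniform = warp_if_else([True] * 32, then_cost=10, else_cost=50)

# a single lane disagrees: the whole warp pays for both paths
divergent = warp_if_else([True] * 31 + [False], then_cost=10, else_cost=50)

print(uniform, divergent)  # 10 60
```

note the one straggler lane makes every other lane wait through the whole else-path, which is why sorting work so warps stay coherent matters so much.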
|
Suspicious Dish posted:nobody seemed to care about quad occupancy i did i just didn't have anything to say also i think i owe you an effortpost but haven't been able to here in a while
|
# ? Oct 24, 2017 07:59 |
|
NEED MORE MILK posted:computers are deterministic. there is no truly random evolution <---- is reminding you that if you have any deece intel cpu made since ~2012 (i think that's the right year) you have a true random number generator built into the cpu which shits out high quality (*) entropy at rates on the order of hundreds of megabytes per second <---- has designed a true random number generator for a former employer (not intel) and can tell you that it is challenging to do well but quite doable * unless you believe the nerds who don't trust RdRand because they think the NSA got to intel. personally i doubt that happened but have no way to prove it
|
# ? Oct 24, 2017 08:12 |
|
ate all the Oreos posted:i'm seriously asking, like is "ordered information" some kind of potential energy since it's not as low an energy state as random / higher entropy states could be? yes. in order to not be random, you need to expend energy to change the bits. 1 bit is 1 unit of 'Shannon entropy', which has been proved to be the same as the entropy we are familiar with in thermodynamics, but it's like the quantum-scale unit.
the theory goes something like this: for a given algorithm, you expect a certain output (in x bits). the more precise you need the answer, the fewer combinations of output bits are allowed, the lower the entropy, and the more work it takes to generate (which is why approximations run faster). there will be a theoretical lower limit of energy required, which varies depending on the algorithm.
take the simple case of generating the square root of a number. a double precision root op is expensive, 32bit is less expensive, and on a cpu we can use some bit twiddling to get a close approximation with fewer instructions. the fastest method of all (depending on memory transaction costs) would be a lookup table, but this requires spending memory (fixed entropy) to save runtime energy - the memory is literally a store of pre-generated entropy that can be used. it has to be filled by doing the algorithm on all inputs (spent energy). a typical optimisation would be to only store 1% of the inputs. if we use it as-is, there will be some lower bits that are 'wrong'. this can be improved with interpolation - but that obviously is just changing the balance: 99% less stored entropy in exchange for the extra operations (energy spent) to interpolate between two entries each time the algorithm is used. the trick for optimisation is balancing the costs of work done (cycles on the processor) with static entropy (memory) to get the desired result within a desired margin of error.
this also means that theoretically a formatted hard disc weighs slightly more than an unformatted one, by a comedy 'weighs less than an atom' amount, as energy is spent on changing the state - and it turns out to be true: a 1TB drive can contain something like 5J worth of extra energy in the potential energy of the magnetic dipoles. the dipoles want to be randomly aligned, alternating n/s/n/s or lined up in rings & random swirly fractal shapes, but when formatted the area of each bit needs the dipoles aligned together. if you could attach little pulleys to the dipoles and let em go, they would swing around to the random shape and the energy could be 'extracted'.
chip based memory will have similar properties - in order to store a 0 or a 1 the states of the atoms need to be put in an unnatural state and will want to decay back; the difference in energy states is the potential energy. solid state memory has a larger energy barrier before the bits can flip, which allows them to remember stuff when unpowered (thermal fluctuations in the atoms' energy are not enough to push them over the energy hump), which means they can preserve their entropy for longer. but by definition this means the energy cost of using them is higher - no free lunches with thermodynamics.
note that most of the energy costs are due to the massively inefficient systems we use. single atom scale devices would use less than a billionth of what we use now, but the same principles would apply. to make any pattern of bits/information will require changing states of something, and the states must be different energy levels of some kind. changing state will require energy, and this energy must be larger than 'thermal noise' else the information will decay. the energy difference between these states will be the potential energy stored per bit, and by e=mc^2, it will have extra mass when formatted.
|
# ? Oct 24, 2017 10:33 |
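the sparse-table-plus-interpolation tradeoff described above (store 1% of the entries, spend a little compute to recover the rest) looks something like this in python - a toy sketch, with made-up sizes, not any production math library:

```python
import math

# precompute sqrt at coarse intervals: "spent energy" banked as memory,
# storing ~1% of the integer inputs in [0, 10000]
STEP = 100
TABLE = [math.sqrt(x) for x in range(0, 10101, STEP)]

def sqrt_lut(x):
    """Approximate sqrt via a sparse table plus linear interpolation."""
    i, frac = divmod(x, STEP)
    lo, hi = TABLE[i], TABLE[i + 1]
    # linear interpolation: extra runtime ops in exchange for 99% less table
    return lo + (hi - lo) * (frac / STEP)

# worst-case error away from the highly curved first segment stays small
err = max(abs(sqrt_lut(x) - math.sqrt(x)) for x in range(100, 10001))
print(err < 0.5)  # True
```

the margin of error is tunable exactly as the post says: shrink STEP (more stored entropy, less work per call) or grow it (less memory, worse lower bits).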
|
Suspicious Dish posted:nobody seemed to care about quad occupancy so i won't post anymore about that... in the meantime have a really good article from 2013 about warp branch behavior and divergence im all about the occupancy iykwim
|
# ? Oct 24, 2017 11:36 |
|
Suspicious Dish posted:let's talk about triangles. everybody loves triangles. gpus love triangles: they're guaranteed planar shapes with very few edge cases (the big one is "degenerates" -- triangles which have no area). they're also *convex*, so you can determine if a point is inside them with three simple edge tests. triangles make everyone's life easier, everyone loves triangles. isnt this due to the shared nature of gpu hardware? like it's required b/c gpu compute units amortize the expensive bits over many threads to allow for massive throughput
|
# ? Oct 24, 2017 11:38 |
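the "three simple edge tests" from the quoted effortpost can be written out in a few lines of python - a minimal sketch of the standard edge-function rasterization test, with illustrative names rather than any real rasterizer's API:

```python
# a point is inside a convex shape iff it's on the same side of every
# edge; for a triangle that's three signed-area ("edge function") tests
def edge(a, b, p):
    """Signed area of triangle (a, b, p); the sign says which side p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_triangle(p, a, b, c):
    w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    # all non-negative or all non-positive -> inside (handles both windings)
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)

tri = ((0, 0), (4, 0), (0, 4))
print(point_in_triangle((1, 1), *tri))  # True
print(point_in_triangle((3, 3), *tri))  # False
```

a degenerate triangle (zero area) makes all three edge functions zero everywhere along its line, which is one reason the hardware wants degenerates culled early.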
|
suspicious dish post the blog
|
# ? Oct 24, 2017 11:45 |
|
BobHoward posted:<---- has designed a true random number generator for a former employer (not intel) and can tell you that it is challenging to do well but quite doable how is it that difficult? u just put a few reverse biased transistors and whiten the bits using von Neumann's trick genuinely interested
|
# ? Oct 24, 2017 11:47 |
|
Malcolm XML posted:how is it that difficult u just put a few reverse biased transistors and whiten the bits using von Neumann's trick i assume it's challenging to do it in a way that's like, provably reliable and also fast? even with key whitening if the reverse-biased transistors get out of whack you could wind up with it drifting to the point where it's spitting out all-ones (and thus not emitting anything) or something like mine did e: i think the "real" ones use avalanche diodes or some other mechanism that's a bit more predictable for this reason actually
|
# ? Oct 24, 2017 14:33 |
|
Malcolm XML posted:how is it that difficult u just put a few reverse biased transistors and whiten the bits using von Neumann's trick von neumann's trick doesn't work if the bits are intercorrelated
|
# ? Oct 24, 2017 17:44 |
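for anyone following along, von Neumann's trick is small enough to sketch in python: read the raw bits in pairs, emit 0 for a 01, 1 for a 10, and throw away 00 and 11. it turns an arbitrarily biased coin into a fair one - but, as noted above, only if the samples are independent; correlated bits break the symmetry the trick relies on.

```python
import random

def von_neumann(bits):
    """Debias a bitstream: pair up bits, 01 -> 0, 10 -> 1, 00/11 discarded."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            # P(01) == P(10) for independent flips, whatever the bias
            out.append(a)
    return out

random.seed(1)
# heavily biased but independent source: ~90% ones
biased = [1 if random.random() < 0.9 else 0 for _ in range(100_000)]
fair = von_neumann(biased)
print(sum(fair) / len(fair))  # roughly 0.5
```

note the cost: with 90/10 bias only ~18% of pairs produce an output bit, which is the inefficiency BobHoward mentions - the textbook extractor throws most of the raw entropy away.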
|
vOv posted:von neumann's trick doesn't work if the bits are intercorrelated SEE VON NEUMANN'S ONE WEIRD TRICK! STATISTICIANS HATE HIM BUT THERE'S NOTHING THEY CAN DO!
|
# ? Oct 24, 2017 20:46 |
|
OzyMandrill posted:physicist in the house! (tho it was many years ago and i've been touching computers ever since) dang, this is very interesting
|
# ? Oct 24, 2017 22:56 |
|
Malcolm XML posted:how is it that difficult u just put a few reverse biased transistors and whiten the bits using von Neumann's trick what the Oreo eater said, and in my case i was asked to build a pure digital circuit that was reasonably process neutral. that pushes you towards some form of ring oscillator and those are tricky in many ways and also difficult to get high performance out of intel published rather a lot about theirs and you can google up a lot about how it works. iirc it’s basically a circuit for forcing a flop into metastability and then letting it resolve to 0 or 1, with analog tuning and mixed signal feedback so it can make itself give a roughly fair coin flip output pre whitening. then for high quality whitening and rate enhancement they use true random bits to seed an aes stream cipher block. (as noted von Neumann assumes all samples are uncorrelated, which is difficult to prove, also von Neumann extractors aren’t terribly efficient unless you do a more obscure generalized version that isn’t taught in textbooks) this is becoming a derail
|
# ? Oct 24, 2017 23:08 |
|
Suspicious Dish posted:nobody seemed to care about quad occupancy so i won't post anymore about that... in the meantime have a really good article from 2013 about warp branch behavior and divergence oh nooo, I love this stuff like, partial derivatives within a 2x2 pixel block, what’s going to happen if some of those pixels diverge and try to calculate some other partial derivative of some other function? I mean I know that the result is going to be undefined, but what is the hardware actually going to be doing?
|
# ? Oct 25, 2017 00:44 |
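a toy model of the 2x2-quad derivative question above, in python with made-up names (real hardware does this with cross-lane operations, and keeps diverged pixels running as masked "helper lanes" so the neighbor values still exist): all four pixels of the quad shade in lockstep, and ddx/ddy are just differences between neighboring lanes. if the lanes diverge and compute different functions, those differences compare unrelated values - which is exactly why the result is undefined.

```python
# toy model of screen-space partial derivatives: shade a 2x2 pixel quad
# in lockstep, then ddx/ddy are differences between neighboring lanes
# (hypothetical layout, not any real GPU's register arrangement)

def shade_quad(f, x, y):
    """Evaluate shader f at the 2x2 pixel quad anchored at (x, y)."""
    return {
        (0, 0): f(x, y),     (1, 0): f(x + 1, y),
        (0, 1): f(x, y + 1), (1, 1): f(x + 1, y + 1),
    }

def ddx(quad, lane):
    # horizontal difference within this lane's row of the quad
    _, ly = lane
    return quad[(1, ly)] - quad[(0, ly)]

def ddy(quad, lane):
    # vertical difference within this lane's column of the quad
    lx, _ = lane
    return quad[(lx, 1)] - quad[(lx, 0)]

quad = shade_quad(lambda x, y: x * x + 3 * y, 10, 20)
print(ddx(quad, (0, 0)), ddy(quad, (0, 0)))  # 21 3
```

in this model the derivative is well-defined only because all four lanes evaluated the same f; swap in a per-lane branch and the subtraction happens anyway, just on garbage - a plausible picture of what the hardware "actually does" in the undefined case.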
|
BobHoward posted:this is becoming a derail but a very interesting one, thank you for these posts
|
# ? Oct 25, 2017 19:43 |