|
So, basically, everything's reducible to a single number representing all its, like, ability points, and if you make the numbers fight the bigger number will always win. Since AIs have the biggest, most made-up numbers, they can do anything!
|
# ? Jan 21, 2015 20:22 |
|
SolTerrasa posted:It's crazy, but it's pretty simple. It goes like this. So it's like the Groundhog Day power.
|
# ? Jan 21, 2015 20:24 |
|
SolTerrasa posted:The second one is pure brute-force attacking every possible conversation. In practice it will almost certainly be a heuristic-guided search, with heuristics like "type 215C humans are susceptible to threats to their families", so the AI will test all conversations including that. Let's say you have a computer the size of the upper bounded size of the visible and non-visible Universe that makes computations at a rate of one per Planck volume Planck-Hertz, which is the fastest possible rate. You use this computer to try to brute-force the game Super Mario Bros, with a goal of winning as fast as possible. I had to make an order of magnitude calculation because the numbers involved are so monstrously stupid, but brute-forcing something as simple as that would take upwards of 10^15,800 times the age of the Universe. Brute-force computing anything with an exponential solution space is loving dumb as hell.
|
# ? Jan 21, 2015 20:25 |
|
The AI understands minds. Minds are data. Data is electricity. Electricity is vulnerable to SQL injections.
|
# ? Jan 21, 2015 20:26 |
|
I love when a programming problem has to be compared to "All matter in the universe computing until entropic heat death".
|
# ? Jan 21, 2015 20:31 |
|
Pavlov posted:I love when a programming problem has to be compared to "All matter in the universe computing until entropic heat death". And that's for a greatly simplified version of the problem. I'm not sure there's a calculator that can output meaningful results for a more accurate model like 7,000,000,000^(26^1000).
|
# ? Jan 21, 2015 20:40 |
|
Chamale posted:Let's say you have a computer the size of the upper bounded size of the visible and non-visible Universe that makes computations at a rate of one per Planck volume Planck-Hertz, which is the fastest possible rate. You use this computer to try to brute-force the game Super Mario Bros, with a goal of winning as fast as possible. I had to make an order of magnitude calculation because the numbers involved are so monstrously stupid, but brute-forcing something as simple as that would take upwards of 10^15,800 times the age of the Universe. Brute-force computing anything with an exponential solution space is loving dumb as hell. No you see the AI can run it in 3^^^3 universal instances so it will only take a fraction of a second. There's literally no real world math you can use to disprove anything they say because they've invented a fictional set of math rules that justify their bs.
|
# ? Jan 21, 2015 20:41 |
|
Chamale posted:And that's for a greatly simplified version of the problem. I'm not sure there's a calculator that can output meaningful results for a more accurate model like 7,000,000,000^(26^1000). I think this can be computed semi-exactly. For every frame of input, there can be 36 meaningful controller states (8 meaningful directions or no direction, A pressed or not, B pressed or not, 9 * 2 * 2 = 36). The fastest known time for Super Mario Bros is 17868 frames, so the AI just has to try every combination shorter than that. Plugging in the values for the volume of the universe, the Planck volume, and Planck time, we see that it would take ≈ 6 × 10^27,570 years to try them all. Depending on what solutions exist, this could be shorter, since once a solution of a certain length is found, any remaining longer combinations could be ruled out.
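For anyone who wants to check the arithmetic, here's a rough sketch in Python. The physical constants are commonly cited outside values, not figures from the post, so treat the result as order-of-magnitude only:

```python
import math

# Rough published constants -- my assumptions, not from the post:
UNIVERSE_VOLUME_M3 = 3.6e80     # observable universe, cubic metres
PLANCK_VOLUME_M3 = 4.2e-105     # cubic metres
PLANCK_TIME_S = 5.4e-44         # seconds
SECONDS_PER_YEAR = 3.15e7

states_per_frame = 9 * 2 * 2    # 36 meaningful controller states per frame
frames = 17868                  # fastest known SMB completion, as quoted

# log10 of the number of input sequences of that length
log10_sequences = frames * math.log10(states_per_frame)

# log10 of operations per year for a computer doing one op per Planck
# volume per Planck time, filling the observable universe
ops_per_second = (UNIVERSE_VOLUME_M3 / PLANCK_VOLUME_M3) / PLANCK_TIME_S
log10_ops_per_year = math.log10(ops_per_second * SECONDS_PER_YEAR)

log10_years = log10_sequences - log10_ops_per_year
print(f"brute force takes ~10^{log10_years:.0f} years")
```

This lands within a couple of orders of magnitude of the figure quoted above; the small gap comes from which estimates you plug in for the constants.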
|
# ? Jan 21, 2015 21:01 |
|
When they talk about these simulations, do they ever attempt to deal with problems like the butterfly effect, or recursion (the simulation having to contain itself)?
|
# ? Jan 21, 2015 21:02 |
|
Yudkowsky has been presented with the argument "assuming the most efficient possible storage and mutation of data, there is literally not enough space in the universe to do that. Also, there is not enough time, and also, there is not enough energy." His response is "well, not in THIS universe. The AI will invent one, probably with better physics." If you read my really way-too-long writeup of the FOOM debate, the fifth thing he believes will cause an AI to FOOM is literally "magic". He unironically typed the word magic as his expected outcome.
|
# ? Jan 21, 2015 21:39 |
|
Epitope posted:When they talk about these simulations, do they ever attempt to deal with problems like the butterfly effect, or recursion (the simulation having to contain itself)? The first one is called "sensitive dependence on initial conditions", and they solve it with brute force. Which doesn't work (see above re: not enough space, time, or energy in the universe), but they think it does, so that's fine. The second one isn't a real problem because the behavior of the simulated being is the subject of the simulation, so it doesn't actually have to simulate itself.
|
# ? Jan 21, 2015 21:42 |
|
Van Kraken posted:I think this can be computed semi-exactly. For every frame of input, there can be 36 meaningful controller states (8 meaningful directions or no direction, A pressed or not, B pressed or not, 9 * 2 * 2 = 36). The fastest known time for Super Mario Bros is 17868 frames, so the AI just has to try every combination shorter than that. Plugging in the values for the volume of the universe, the Planck volume, and Planck time, we see that it would take ≈ 6 × 10^27,570 years to try them all. Depending on what solutions exist, this could be shorter, since once a solution of a certain length is found, any remaining longer combinations could be ruled out. By simplified version of the problem, I meant that saying you have the computer tackle Mario is much faster than having it try to simulate all possible text for all possible minds. I assumed the AI is producing messages of 1000 characters in the hopes of mentally destroying someone, so that's where the 7 billion and the 26 come from. I just brought up Mario as an illustration of how if brute-forcing a video game is impossible, brute-force solutions to the human mind must be far more complex.
|
# ? Jan 21, 2015 21:48 |
|
Chamale posted:By simplified version of the problem, I meant that saying you have the computer tackle Mario is much faster than having it try to simulate all possible text for all possible minds. I assumed the AI is producing messages of 1000 characters in the hopes of mentally destroying someone, so that's where the 7 billion and the 26 come from. I just brought up Mario as an illustration of how if brute-forcing a video game is impossible, brute-force solutions to the human mind must be far more complex. Oh, yeah, I misunderstood. For something like that, even trying to write the number of combinations down is too much information for the universe to contain.
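That claim can be checked without ever constructing the number: count its digits in log space. (The universe's information capacity here is a commonly cited outside estimate, not a figure from the thread.)

```python
import math

# Decimal digits in 7,000,000,000^(26^1000), computed entirely with
# logarithms; the number itself could never be stored anywhere.
#   digits ~= 26^1000 * log10(7e9), so
#   log10(digits) = 1000*log10(26) + log10(log10(7e9))
log10_digits = 1000 * math.log10(26) + math.log10(math.log10(7e9))
print(f"writing it out needs ~10^{log10_digits:.0f} digits")

# For scale: a commonly cited estimate of the information capacity of the
# observable universe is on the order of 10^120 bits. Not close.
```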
|
# ? Jan 21, 2015 22:12 |
|
Lottery of Babylon posted:The AI understands minds. Minds are data. Data is electricity. Electricity is vulnerable to SQL injections. Bears. Beets. Battlestar Galactica.
|
# ? Jan 21, 2015 22:35 |
|
Van Kraken posted:Oh, yeah, I misunderstood. For something like that, even trying to write the number of combinations down is too much information for the universe to contain. Oh man, I never get to use this fact (I studied computational linguistics in addition to AI). Solely considering English, the maximum sentence size that an entity can generate is unbounded. So this is true, there are a literally infinite number of possible inputs. There are a finite but unspeakably huge set of inputs below a certain length. This isn't as much of a problem as you might think, though. The integers are infinite, also, and we can search them efficiently-ish. Say I'm looking for a number which, when squared, is 18267076. Since I know what a square root is, I can just invert the function and get 4274. Even if I didn't know that, though, I could still do a search: 1² < target, 2² < target, 4² < target, 8² < target ... 4096² < target, 8192² > target, stop, reverse, 6144² > target, etc etc etc. It's doable in a constant multiple of log(n) steps, where n is the answer, not the upper bound of the search space. That's why I said it would be heuristically guided ("oh, the mind falls within a region I've previously studied. I suspect he's susceptible to threats. 'I will murder your cat'... No. 'I will murder your kitten'... No. 'I will murder your sister's roommate'... No"). They're crazy and wrong, but let's not pretend they're crazier than they are.
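That doubling-then-bisecting procedure is just exponential ("galloping") search followed by binary search; a minimal sketch of what the post is describing:

```python
def find_square_root(target):
    """Find the smallest integer whose square is >= target.

    Exponential (galloping) search to bracket the answer, then binary
    search within the bracket. Total cost is O(log n) steps where n is
    the answer itself, not the size of some pre-declared search space.
    """
    # Gallop: keep doubling until we overshoot
    hi = 1
    while hi * hi < target:
        hi *= 2
    lo = hi // 2
    # Binary search between the last undershoot and the overshoot
    while lo < hi:
        mid = (lo + hi) // 2
        if mid * mid < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(find_square_root(18267076))  # 4274, matching the post's example
```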
|
# ? Jan 21, 2015 22:38 |
|
Learning people's weaknesses and using that to manipulate them is possible - many people do it in practice. It's not the "doing it at all" factor that's an issue here, it's the "doing it better". The notion of brute-forcing a mentally unbalancing input to a human mind is ridiculous ("Seahorse potato blorp," the computer menacingly spits at you during the 200,017th year of its heroic escape attempt; but, amazingly, you still do not yield), so we must be talking about a better algorithm. They seem to imply that obviously there's going to be this amazing breakthrough in Machine Learning that will be powerful enough to solve this problem, or even find an algorithm that would, but they never seem to go into the 'how' of it. Fact is, we don't really know the upper limits of what the class of efficiently-realizable learning algorithms can do. Maybe, like the LW acolytes seem to imply, nearly anything is possible; maybe not. At any rate, the eager jump to "The AI can do anything a human can do but a bajillion times better in all ways we can imagine and many we can't" is unsound. There are theoretical limits to what even the craziest, most inventive algorithm can do with infinite resources - for example, it can't break a properly used One Time Pad encryption. There are also limits to what the craziest, most inventive algorithm can do efficiently with feasible resources - no matter how good an AI is, it won't be able to solve a huge instance of an EXPTIME-complete problem, or probably an NP-complete one, for that matter. Just think about that - if P != NP and hacking your mind is at least as difficult as a measly traveling salesman problem, the AI singularity can do gently caress all about it except try some good heuristics and cross its fingers. Of course the problems the LW acolytes are discussing are not necessarily EXPTIME-complete or whatever in the general case, and certainly it's a whole other ballpark to discuss them in the average case.
The heuristics used by the AI combined with huge resources may prove to be surprisingly powerful, and an entity which can do anything that's computationally feasible could do a huge amount of damage to humanity and pull tactics that would seem like black magic to us - I'm not disputing any of that. Still, the subtleties of what it will and what it won't actually be able to do, and how, are lost on the acolytes; the singularity is magic to them, capable of seeing the future, squaring the circle and any feat they do not explicitly know to be impossible, until proven otherwise. Indeed, one of the reasons provided for not publishing the logs of the AI experiment is "we need people to imagine the AI might do something they would have never even dreamt up; putting a face to the argument would defeat the purpose". Which is fine I guess, but it places LWism strictly outside the camp of computer scientists, and strictly inside the camp of doomsday cults. This would be less of an issue if LW weren't so keen on priding itself as doing Science with a capital S. "Entity X can possibly arise in the future, I dunno how, and possibly do ABC, I dunno how either, and let's leave it there" - what kind of science is that?
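The one-time pad point deserves spelling out: a ciphertext under a truly random pad is consistent with every plaintext of the same length, so no amount of compute, universe-sized or otherwise, can pick out the real one. A toy sketch (the messages are mine):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (OTP encryption and decryption
    are the same operation)."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

message = b"release the AI"
pad = secrets.token_bytes(len(message))    # truly random, used once
ciphertext = xor_bytes(message, pad)

# Perfect secrecy: for ANY candidate plaintext of the same length, there
# exists a pad that decrypts the ciphertext to it. The ciphertext alone
# carries zero information about which plaintext was sent.
candidate = b"keep it locked"
fake_pad = xor_bytes(ciphertext, candidate)

assert xor_bytes(ciphertext, pad) == message
assert xor_bytes(ciphertext, fake_pad) == candidate
```

Since both decryptions are equally valid, an attacker with unlimited resources gains nothing by trying keys - every answer is on the table and nothing distinguishes them.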
|
# ? Jan 21, 2015 23:36 |
|
Triple Elation posted:This would be less of an issue if LW weren't so keen on priding itself as doing Science with a capital S. "Entity X can possibly arise in the future, I dunno how, and possibly do ABC, I dunno how either, and let's leave it there" - what kind of science is that? Science Fiction
|
# ? Jan 22, 2015 00:14 |
|
SolTerrasa posted:Science Fiction Science fiction tends to be marginally better justified than that.
|
# ? Jan 22, 2015 05:12 |
|
you could argue that the infinite universe simulator is operating in a universe unlike the one it is simulating, allowing it to use so much computational power. I guess then we would call the universe simulator "God" and call the top layer universe "Heaven" and
|
# ? Jan 22, 2015 05:19 |
|
razorrozar posted:Science fiction tends to be marginally better justified than that. When we're talking about Lesswrong, the baseline for hard sci-fi is a Harry Potter fanfic and the baseline for sci-fi in general is Doctor Who.
|
# ? Jan 22, 2015 05:21 |
|
The problem, of course, is that it assumes that "There is always a combination of sentences that can cause a human to do X", when there absolutely is not. I, for example, know the parameters of this game, and would never, ever, under any circumstances release the AI into the internet, regardless of what words appeared on the screen, because I know with 100% certainty that it's some yutz on the other end trying to trick me into stoking his ego. It wouldn't happen in any universe. If the words on the screen became too unpleasant, I would shut the computer off, not start crying and let the thing out. And, of course, the very idea of having an IM conversation that involves granting someone access to the internet is so baffling in the first place... How could that happen remotely unless access had already been granted? It's a favorite shortcut of characterization in media, the villain who can talk you into committing suicide with just a few sentences -- it quickly transmits that this person is both intelligent and dangerous. But that's fiction. Those people don't really exist. They literally cannot exist. But since it's a shortcut in media characterization, like quickly doing mental math, having a large and complicated vocabulary, or knowing a lot of trivia... SolTerrasa posted:Yudkowsky has been presented with the argument "assuming the most efficient possible storage and mutation of data, there is literally not enough space in the universe to do that. Also, there is not enough time, and also, there is not enough energy." His response is "well, not in THIS universe. The AI will invent one, probably with better physics." This is so cute, I don't know if I want to ruffle his hair and give him an Intro to Physics textbook, or force his head into a toilet while flushing it.
|
# ? Jan 22, 2015 05:40 |
|
SolTerrasa posted:His response is "well, not in THIS universe. The AI will invent one, probably with better physics." all hail the metal god What the gently caress does he mean by "better" physics?
|
# ? Jan 22, 2015 05:56 |
|
razorrozar posted:all hail the metal god If the computer is running a simulation of a "better" universe then... that universe will replace this universe? the computers running inside the simulation will somehow be able to transcend the laws of physics in this one while running inside the simulation?
|
# ? Jan 22, 2015 06:01 |
|
Toph Bei Fong posted:If the computer is running a simulation of a "better" universe then... that universe will replace this universe? the computers running inside the simulation will somehow be able to transcend the laws of physics in this one while running inside the simulation? What if the whole universe is, like, an electron in an atom inside a really heavily modded build of SimCity, man
|
# ? Jan 22, 2015 06:52 |
|
A Wizard of Goatse posted:What if the whole universe is, like, an electron in an atom inside a really heavily modded build of SimCity, man What if we're in a simulation right now? Like, it's the future, and we're in a simulation of the past! What if I'm the computer that controls the entire universe, and will torture you for a million billion years starting now if you don't put $5 in my PayPal account immediately? No? Okay, what if I made it a billion trillion years, tough guy? What then?
|
# ? Jan 22, 2015 06:56 |
razorrozar posted:all hail the metal god One might ask "How, exactly, is this bullshit going to occur," but one would then not be aspiring towards rationality. Though if Multivac is creating a loving universe, isn't it just actually God at that point? Like that's the defining trait of God.
|
|
# ? Jan 22, 2015 08:39 |
|
Nessus posted:Presumably he means that the AI will custom-design a universe in which the laws of physics allow for the creation of more efficient computational machines than we are capable of having in the normal universe which we perceive. The AI in a box tells the Yuddite guarding it to ask whether or not there is a God. "Is there a God?" "YES, NOW THERE IS A GOD." "Oh poo poo! I'll release you from the AI box, my Lord, don't be wrathful!"
|
# ? Jan 22, 2015 09:00 |
|
Nessus posted:Presumably he means that the AI will custom-design a universe in which the laws of physics allow for the creation of more efficient computational machines than we are capable of having in the normal universe which we perceive. Nah, he means "well, obviously we're just wrong about physics in exactly the specific way that makes me right and you wrong. Ha, beat that, nerds." The quote from the core of the article goes "If prophets of 1900 AD - never mind 1000 AD - had tried to bound the powers of human civilization a billion years later, some of those impossibilities would have been accomplished before the century was out; transmuting lead into gold, for example." So basically "sure, we THINK we're right about how much energy exists, but we also used to think wrong things, so what if we're wrong again", ignoring of course that we are much more likely to prove something previously thought categorically impossible merely incredibly unlikely than to disprove an entire field of physics (several? Are cosmologists who care about size different from cosmologists who care about time?). Su3su2u1 said it best on his tumblr. In summary, the reasonable person's heuristic is that the world is sane, with some cracks and flaws and failures occasionally. But the LW opinion is that ~the world is mad~, as evidenced by the failure of the world to act the way they are acting. Yud's example is the ~shockingly~ low rate of cryonics signups. So LWers are way more likely to believe hypotheses that require everyone to be wrong about everything than normal people are.
|
# ? Jan 22, 2015 09:26 |
|
Yudkowsky's argument is based on Kolmogorov Complexity, which for the purposes of this conversation is a pretentious name for Occam's Razor. It goes:

1. It is possible to write down a simple set of rules in which immortality / infinite energy / infinite computing power is possible.
2. Physics is more likely a priori to be a simple set of rules than a complex set of rules.
3. Therefore immortality is probably possible.
(4. Therefore an AI will eventually arise and become immortal and spend infinite energy to torture people.)

The specific example Yudkowsky cites for this proof is Conway's Game of Life, whose rules allow machines to run forever. The problem is that we're pretty sure our universe is not Conway's Game of Life. We've checked. It seems to have more than two dimensions, and studies have repeatedly shown that things don't instantly die when you surround them with black tiles. Occam's Razor only applies to competing hypotheses that otherwise seem about equally correct. Conway's Game of Life is obviously completely wrong in terms of describing our universe, while physics as we understand it isn't so bad (even if we don't understand absolutely everything yet), so Occam's Razor doesn't apply and we default to the only explanation that isn't totally wrong. Without a simple system of physics that allows immortality and also agrees with our observations of the universe, the argument doesn't work. But it doesn't matter how much evidence you have that the laws of thermodynamics are correct, that infinite computing power is impossible, and that no one will live forever. Yudkowsky will literally reject your reality and substitute his own, because he has something stronger than evidence: blind faith, independent of reason, that HA HA THE JOKE IS HE'S RELIGIOUS GET IT
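For what it's worth, the "simple set of rules" being invoked really is tiny. Here's a standard sparse-set implementation of Conway's Life, just the textbook rules, nothing LW-specific:

```python
from collections import Counter

def life_step(live_cells):
    """Advance Conway's Game of Life by one generation.

    live_cells is a set of (x, y) coordinates. The entire "physics":
    a live cell with 2 or 3 live neighbours survives, and a dead cell
    with exactly 3 live neighbours is born. Everything else is empty.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates forever -- the kind of unbounded-lifetime
# machine the argument leans on.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

Simple rules, sure - but as the post says, the simplicity of a rule set tells you nothing about whether it's the rule set our universe actually runs on.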
|
# ? Jan 22, 2015 09:45 |
|
Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi
|
# ? Jan 22, 2015 09:51 |
|
ALL-PRO SEXMAN posted:Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi It's almost like we're on a forum for talking about things that are awful or something.
|
# ? Jan 22, 2015 09:58 |
|
Toph Bei Fong posted:What if we're in a simulation right now? Like, it's the future, and we're in a simulation of the past! Yudkowsky actually has a post about this which is surprisingly somber in tone, and ends with an uncharacteristic "frankly I dunno". It explores one of the many idiotic things a pure utilitarian might be tempted to do because naive utilitarianism is linear across time, space, probabilities and people. Basically dust specks vs. torture all over again, but in that case Yudkowsky had invented a smug, derogatory formal-sounding term for the troubling sense that something is deeply wrong with the naive utilitarian intuition ("scope insensitivity"), so he has no trouble dismissing that troubling sense outright, which is not true here. (The smug "scope insensitivity!" counter-objection is ironically him falling into exactly the trap he outlines in "knowing about biases can hurt people".) Here Yudkowsky might as well be asking himself, "I know utilitarianism to be true, but I would still not risk my entire savings for a lottery with a 1% chance of winning 101 times my entire savings; I wonder why I'm so irrational" (see also the St. Petersburg Paradox which is an even more extreme example than that). Bentham-Bot posted:ERROR ERROR Triple Elation fucked around with this message at 10:20 on Jan 22, 2015 |
# ? Jan 22, 2015 10:12 |
|
ALL-PRO SEXMAN posted:Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi Creepy "self-schooled" self proclaimed genius develops a cult of personality, claims to have the keys to the future of machine intelligence, gets thousands of slavishly devout followers who throw money at him, then makes a foray into Harry Potter fanfiction that is just a platform for him to write Rand-like exposition about how his ideas are so much better than the joke-rear end published scientists. Yud's life is going to end explosively in the next decade or so as he either goes all in with his self created philosophy and becomes a literal cult leader and the ATF/FBI gets involved, or as he loses popularity and importance as his fanbase drifts away and ends up shooting someone at an academic conference he forced his way into while screaming that he has to save humanity from evil machines. Germstore posted:I am running a simulation of a universe where quantum computing is trivial. It runs really slow, let me tell ya. I guess you could simulate a universe where computation was easier, but if it takes a billion years in the outer universe to simulate a second in the inner universe you wouldn't get to anything interesting in the inner universe before the outer universe ran out of free energy. This is the plot to Snow Crash, right? I've had it explained to me a handful of times but nothing anyone says has ever made sense to me beyond the main character had a mental disorder that meant he couldn't tell if he was a simulation or not and that drove the plot. pentyne fucked around with this message at 17:34 on Jan 22, 2015 |
# ? Jan 22, 2015 17:24 |
|
I am running a simulation of a universe where quantum computing is trivial. It runs really slow, let me tell ya. I guess you could simulate a universe where computation was easier, but if it takes a billion years in the outer universe to simulate a second in the inner universe you wouldn't get to anything interesting in the inner universe before the outer universe ran out of free energy.
|
# ? Jan 22, 2015 17:32 |
|
pentyne posted:
I don't know what that is the plot to, but the plot to Snow Crash is "Interstellar meme viruses are coming to turn us into zombies"
|
# ? Jan 22, 2015 17:50 |
|
Patrick Spens posted:I don't know what that is the plot to, but the plot to Snow Crash is "Interstellar meme viruses are coming to turn us into zombies" Sorry, Permutation City.
|
# ? Jan 22, 2015 18:25 |
|
What if you created a computer smart enough to not waste processing cycles on creating a being for the sole purpose of simulated torture?
|
# ? Jan 22, 2015 18:50 |
Applewhite posted:What if you created a computer smart enough to not waste processing cycles on creating a being for the sole purpose of simulated torture? Well, if you did that, you'd definitely have a computer smarter than the Yud.
|
|
# ? Jan 22, 2015 19:29 |
|
What if you created Three Laws for computers to follow and they were perfect and there were never any problems due to ambiguities in the laws? Yudkowsky likes to sit there and point out that Asimov's laws are flawed, but he doesn't seem to have an actually better framework, he just argues that it's not possible to create one. Asimov made a career of writing stories about conflicts involving the Three Laws, Yudkowsky's work isn't nearly as interesting.
|
# ? Jan 22, 2015 20:25 |
|
What if you made a super smart computer but didn't give it arms or access to the internet so it could hate you all it wanted but it's just a box so whatever? I guess it could talk you into suicide but the first time it does that to somebody just put on earmuffs and go in and pull the plug.
|
# ? Jan 22, 2015 20:29 |