A Wizard of Goatse
Dec 14, 2014

So, basically, everything's reducible to a single number representing all its, like, ability points, and if you make the numbers fight the bigger number will always win. Since AIs have the biggest, most made-up numbers, they can do anything!

Epitope
Nov 27, 2006

Grimey Drawer

SolTerrasa posted:

It's crazy, but it's pretty simple. It goes like this.

-There are a finite number of minds that exist, and they collectively represent "mind design space". This is a very high-dimensional space, and every mind which could conceivably exist falls somewhere within it.

-A sufficiently powerful AI can predict the behavior of any mind in mind-design space.

-Consequently, the AI can prune away possibilities by interacting with you. If you were THIS mind you'd have said "whom" instead of "who", if you were THAT one you'd have used a period or a contraction or whatever.

-After enough time, the AI can find you precisely in mind-design space.

-Most people can be persuaded to believe some things with just a little cleverness, so it's probable that, in principle, most people can be persuaded to believe more things. Probably not "anybody" or "anything", though.

-Since the AI now knows precisely which mind occupies your head, it can simulate every possible conversation with you to see if ANY of them would persuade you of some target belief.

-Then the AI has that conversation. You presumably don't notice either portion of this because it seems like a normal conversation.

The first one has the feel of a known-plaintext attack, sort of, since the AI can build correlations between behaviors and vast swaths of mind space and choose the actions most likely to give it the information to eliminate you from that space. Certainly it's a lot like a search for an encryption key, except in a much larger space.

The second one is pure brute-force attacking every possible conversation. In practice it will almost certainly be a heuristic-guided search, with heuristics like "type 215C humans are susceptible to threats to their families", so the AI will test all conversations including that.
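
A toy version of the pruning step, for concreteness (hypothetical traits and minds of my own invention; the real claim involves all of mind-design space): every observed behavior eliminates the candidate minds that wouldn't have produced it.

code:
mind_space = {
    "mind_a": {"says_whom": True,  "uses_contractions": False},
    "mind_b": {"says_whom": False, "uses_contractions": True},
    "mind_c": {"says_whom": False, "uses_contractions": False},
}

# Behaviors actually observed in conversation
observations = {"says_whom": False, "uses_contractions": False}

# Keep only the minds consistent with every observation
candidates = {
    name
    for name, traits in mind_space.items()
    if all(traits[k] == v for k, v in observations.items())
}
print(candidates)  # {'mind_c'}: you've been located in mind-design space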

Overall it's only about as impossible as the other things they believe axiomatically.

So it's like the Groundhog Day power.

Chamale
Jul 11, 2010

I'm helping!



SolTerrasa posted:

The second one is pure brute-force attacking every possible conversation. In practice it will almost certainly be a heuristic-guided search, with heuristics like "type 215C humans are susceptible to threats to their families", so the AI will test all conversations including that.

Let's say you have a computer the size of the entire visible and non-visible Universe (the upper-bound size), performing one computation per Planck volume per Planck time, which is the fastest possible rate. You use this computer to try to brute-force the game Super Mario Bros, with the goal of winning as fast as possible. I had to make an order-of-magnitude calculation because the numbers involved are so monstrously stupid, but brute-forcing something as simple as that would take upwards of 10^15,800 times the age of the Universe. Brute-force computing anything with an exponential solution space is loving dumb as hell.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The AI understands minds. Minds are data. Data is electricity. Electricity is vulnerable to SQL injections.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
I love when a programming problem has to be compared to "All matter in the universe computing until entropic heat death".

Chamale
Jul 11, 2010

I'm helping!



Pavlov posted:

I love when a programming problem has to be compared to "All matter in the universe computing until entropic heat death".

And that's for a greatly simplified version of the problem. I'm not sure there's a calculator that can output meaningful results for a more accurate model like 7,000,000,000 × 26^1000.

pentyne
Nov 7, 2012

Chamale posted:

Let's say you have a computer the size of the entire visible and non-visible Universe (the upper-bound size), performing one computation per Planck volume per Planck time, which is the fastest possible rate. You use this computer to try to brute-force the game Super Mario Bros, with the goal of winning as fast as possible. I had to make an order-of-magnitude calculation because the numbers involved are so monstrously stupid, but brute-forcing something as simple as that would take upwards of 10^15,800 times the age of the Universe. Brute-force computing anything with an exponential solution space is loving dumb as hell.

No you see the AI can run it in 3^^^3 universal instances so it will only take a fraction of a second.

There's literally no real-world math you can use to disprove anything they say, because they've invented a fictional set of math rules that justify their bs.

Van Kraken
Feb 13, 2012

Chamale posted:

And that's for a greatly simplified version of the problem. I'm not sure there's a calculator that can output meaningful results for a more accurate model like 7,000,000,000 × 26^1000.

I think this can be computed semi-exactly. For every frame of input, there can be 36 meaningful controller states (8 meaningful directions or no direction, A pressed or not, B pressed or not, 9 * 2 * 2 = 36). The fastest known time for Super Mario Bros is 17868 frames, so the AI just has to try every combination shorter than that. Plugging in the values for the volume of the universe, the Planck volume, and Planck time, we see that it would take ≈ 6 * 10^27,570 years to try them all. Depending on what solutions exist, this could be shorter, since once a solution of a certain length is found, any remaining longer combinations could be ruled out.
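
Here's that arithmetic as a rough Python sketch, with ballpark cosmological constants of my own choosing, so treat the exponent as approximate:

code:
import math

PLANCK_VOLUME = 4.2e-105    # m^3
PLANCK_TIME = 5.4e-44       # s
UNIVERSE_VOLUME = 3.5e80    # m^3, observable universe (ballpark)
SECONDS_PER_YEAR = 3.15e7

# One computation per Planck volume per Planck time
ops_per_second = (UNIVERSE_VOLUME / PLANCK_VOLUME) / PLANCK_TIME

# 36 controller states per frame, up to 17868 frames; work in log10
# because 36^17868 overflows any float
log10_sequences = 17868 * math.log10(36)
log10_ops_per_year = math.log10(ops_per_second) + math.log10(SECONDS_PER_YEAR)

print(f"~10^{log10_sequences - log10_ops_per_year:.0f} years")  # ~10^27572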

Epitope
Nov 27, 2006

Grimey Drawer
When they talk about these simulations, do they ever attempt to deal with problems like the butterfly effect, or recursion (the simulation having to contain itself)?

SolTerrasa
Sep 2, 2011


Yudkowsky has been presented with the argument "assuming the most efficient possible storage and mutation of data, there is literally not enough space in the universe to do that. Also, there is not enough time, and also, there is not enough energy." His response is "well, not in THIS universe. The AI will invent one, probably with better physics."

If you read my really way-too-long writeup of the FOOM debate, the fifth thing he believes will cause an AI to FOOM is literally "magic". He unironically typed the word magic as his expected outcome.

SolTerrasa
Sep 2, 2011


Epitope posted:

When they talk about these simulations, do they ever attempt to deal with problems like the butterfly effect, or recursion (the simulation having to contain itself)?

The first one is called "sensitive dependence on initial conditions", and they solve it with brute force. Which doesn't work (see above re: not enough space, time, or energy in the universe), but they think it does, so that's fine.
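
(If you want to see how fast brute force loses to chaos, here's the classic toy demo, my own choice of illustration: two logistic-map trajectories that start 10^-10 apart are completely decorrelated within a few dozen steps, so any finite-precision simulation diverges from the real system just as fast.)

code:
x, y = 0.4, 0.4 + 1e-10   # two initial conditions, 1e-10 apart
for _ in range(60):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
print(abs(x - y))  # order-1 difference: the tiny initial error has swamped everything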

The second one isn't a real problem because the behavior of the simulated being is the subject of the simulation, so it doesn't actually have to simulate itself.

Chamale
Jul 11, 2010

I'm helping!



Van Kraken posted:

I think this can be computed semi-exactly. For every frame of input, there can be 36 meaningful controller states (8 meaningful directions or no direction, A pressed or not, B pressed or not, 9 * 2 * 2 = 36). The fastest known time for Super Mario Bros is 17868 frames, so the AI just has to try every combination shorter than that. Plugging in the values for the volume of the universe, the Planck volume, and Planck time, we see that it would take ≈ 6 * 10^27,570 years to try them all. Depending on what solutions exist, this could be shorter, since once a solution of a certain length is found, any remaining longer combinations could be ruled out.

By simplified version of the problem, I meant that saying you have the computer tackle Mario is much faster than having it try to simulate all possible text for all possible minds. I assumed the AI is producing messages of 1000 characters in the hopes of mentally destroying someone, so that's where the 7 billion and the 26 come from. I just brought up Mario as an illustration of how if brute-forcing a video game is impossible, brute-force solutions to the human mind must be far more complex.
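
Quick sanity check of that model's size (26 letters, 1000 characters, 7 billion minds, per the post; my Python):

code:
import math

MINDS = 7_000_000_000
ALPHABET = 26
MESSAGE_LENGTH = 1000

# log10 of 7e9 * 26^1000, since the number itself has ~1425 digits
log10_total = math.log10(MINDS) + MESSAGE_LENGTH * math.log10(ALPHABET)
print(f"~10^{log10_total:.0f} (mind, message) combinations")  # ~10^1425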

Van Kraken
Feb 13, 2012

Chamale posted:

By simplified version of the problem, I meant that saying you have the computer tackle Mario is much faster than having it try to simulate all possible text for all possible minds. I assumed the AI is producing messages of 1000 characters in the hopes of mentally destroying someone, so that's where the 7 billion and the 26 come from. I just brought up Mario as an illustration of how if brute-forcing a video game is impossible, brute-force solutions to the human mind must be far more complex.

Oh, yeah, I misunderstood. For something like that, even trying to write all the combinations down is too much information for the universe to contain.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

Lottery of Babylon posted:

The AI understands minds. Minds are data. Data is electricity. Electricity is vulnerable to SQL injections.

Bears. Beets. Battlestar Galactica.

SolTerrasa
Sep 2, 2011


Van Kraken posted:

Oh, yeah, I misunderstood. For something like that, even trying to write all the combinations down is too much information for the universe to contain.

Oh man, I never get to use this fact (I studied computational linguistics in addition to AI). Considering English alone, the maximum sentence length an entity can generate is unbounded. So this is true: there are literally infinitely many possible inputs. There is a finite but unspeakably huge set of inputs below any given length.

This isn't as much of a problem as you might think, though. The integers are infinite, also, and we can search them efficiently-ish. Say I'm looking for a number which, when squared, is 18267076. Since I know what a square root is, I can just invert the function and get 4274.

Even if I didn't know that, though, I could still do a search: 1² < target, 2² < target, 4² < target, 8² < target ... 4096² < target, 8192² > target, stop, reverse, 6144² > target, etc etc etc. It's doable in a constant multiple of log(n) steps, where n is the answer, not the upper bound of the search space.
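
As a minimal Python sketch (assuming no knowledge of square roots at all):

code:
def search_square_root(target):
    # Gallop: keep doubling until we overshoot the target
    hi = 1
    while hi * hi < target:
        hi *= 2
    lo = hi // 2
    # Binary search the bracketed range [lo, hi]
    while lo <= hi:
        mid = (lo + hi) // 2
        sq = mid * mid
        if sq == target:
            return mid
        if sq < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None  # not a perfect square

print(search_square_root(18267076))  # 4274, found in ~25 steps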

That's why I said it would be heuristically guided ("oh, the mind falls within a region I've previously studied. I suspect he's susceptible to threats. 'I will murder your cat'... No. 'I will murder your kitten'... No. 'I will murder your sister's roommate'... No"). They're crazy and wrong, but let's not pretend they're crazier than they are. :v:

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Learning people's weaknesses and using that to manipulate them is possible - many people do it in practice. It's not the "doing it at all" factor that's an issue here, it's the "doing it better". The notion of brute-forcing a mentally unbalancing input to a human mind is ridiculous ("Seahorse potato blorp," the computer menacingly spits at you during the 200,017th year of its heroic escape attempt; but, amazingly, you still do not yield), so we must be talking about a better algorithm. They seem to imply that obviously there's going to be some amazing breakthrough in machine learning powerful enough to solve this problem, or at least to find an algorithm that would, but they never seem to go into the 'how' of it.

Fact is, we don't really know the upper limits of what the class of efficiently-realizable learning algorithms can do. Maybe, like the LW acolytes seem to imply, nearly anything is possible; maybe not. At any rate, the eager jump to "The AI can do anything a human can do but a bajillion times better in all ways we can imagine and many we can't" is unsound. There are theoretical limits to what even the craziest, most inventive algorithm can do with infinite resources - for example, it can't break a properly used one-time pad. There are also limits to what the craziest, most inventive algorithm can do efficiently with feasible resources - no matter how good an AI is, it won't be able to solve a huge instance of an EXPTIME-complete problem, or, probably, even an NP-complete one. Just think about that - if P != NP and hacking your mind is at least as difficult as a measly traveling salesman problem, the AI singularity can do gently caress all about it except try some good heuristics and cross its fingers.
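
To make the one-time pad point concrete, a small sketch (toy strings of my own): for any candidate plaintext of the right length there exists a key that decrypts the ciphertext to it, so even unbounded compute learns nothing.

code:
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))  # uniformly random, used once
ciphertext = xor(plaintext, key)

# For ANY candidate of the same length, some key decrypts to it
candidate = b"RETREAT AT TEA"
fake_key = xor(ciphertext, candidate)
assert xor(ciphertext, fake_key) == candidate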

Of course the problems the LW acolytes are discussing are not necessarily EXPTIME-complete or whatever in the general case, and certainly it's a whole other ballpark to discuss them in the average case. The heuristics used by the AI combined with huge resources may prove to be surprisingly powerful, and an entity which can do anything that's computationally feasible could do a huge amount of damage to humanity and pull tactics that would seem like black magic to us - I'm not disputing any of that. Still, the subtleties of what it will and what it won't actually be able to do, and how, are lost on the acolytes; the singularity is magic to them, capable of seeing the future, squaring the circle and any feat they do not explicitly know to be impossible, until proven otherwise. Indeed, one of the reasons provided for not publishing the logs of the AI experiment is "we need people to imagine the AI might do something they would have never even dreamt up; putting a face to the argument would defeat the purpose". Which is fine I guess, but it places LWism strictly outside the camp of computer scientists, and strictly inside the camp of doomsday cults. This would be less of an issue if LW weren't so keen on priding itself as doing Science with a capital S. "Entity X can possibly arise in the future, I dunno how, and possibly do ABC, I dunno how either, and let's leave it there" - what kind of science is that?

SolTerrasa
Sep 2, 2011


Triple Elation posted:

This would be less of an issue if LW weren't so keen on priding itself as doing Science with a capital S. "Entity X can possibly arise in the future, I dunno how, and possibly do ABC, I dunno how either, and let's leave it there" - what kind of science is that?

Science Fiction

razorrozar
Feb 21, 2012

by Cyrano4747

SolTerrasa posted:

Science Fiction

Science fiction tends to be marginally better justified than that.

PTSDeedly Do
Nov 24, 2014

VOID-DOME LOSER 2020


you could argue that the infinite universe simulator is operating in a universe unlike the one it is simulating, allowing it to use so much computational power

I guess then we would call the universe simulator "God" and call the top layer universe "Heaven" and

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

razorrozar posted:

Science fiction tends to be marginally better justified than that.

When we're talking about Lesswrong, the baseline for hard sci-fi is a Harry Potter fanfic and the baseline for sci-fi in general is Doctor Who.

Toph Bei Fong
Feb 29, 2008



The problem, of course, is that it assumes that "There is always a combination of sentences that can cause a human to do X", when there absolutely is not. I, for example, know the parameters of this game, and would never, ever, under any circumstances release the AI into the internet, regardless of what words appeared on the screen, because I know with 100% certainty that it's some yutz on the other end trying to trick me into stoking his ego. It wouldn't happen in any universe. If the words on the screen became too unpleasant ( :jerkbag: ), I would shut the computer off, not start crying and let the thing out. And, of course, the very idea of having an IM conversation that involves granting someone access to the internet is so baffling in the first place... How could that happen remotely unless access had already been granted?

It's a favorite shortcut of characterization in media, the villain who can talk you into committing suicide with just a few sentences -- it quickly transmits that this person is both intelligent and dangerous. But that's fiction. Those people don't really exist. They literally cannot exist. But since it's a shortcut in media characterization, like quickly doing mental math, having a large and complicated vocabulary, or knowing a lot of trivia...

SolTerrasa posted:

Yudkowsky has been presented with the argument "assuming the most efficient possible storage and mutation of data, there is literally not enough space in the universe to do that. Also, there is not enough time, and also, there is not enough energy." His response is "well, not in THIS universe. The AI will invent one, probably with better physics."

If you read my really way-too-long writeup of the FOOM debate, the fifth thing he believes will cause an AI to FOOM is literally "magic". He unironically typed the word magic as his expected outcome.

This is so cute, I don't know if I want to ruffle his hair and give him an Intro to Physics textbook, or force his head into a toilet while flushing it.

razorrozar
Feb 21, 2012

by Cyrano4747

SolTerrasa posted:

His response is "well, not in THIS universe. The AI will invent one, probably with better physics."

all hail the metal god

What the gently caress does he mean by "better" physics?

Toph Bei Fong
Feb 29, 2008



razorrozar posted:

all hail the metal god

What the gently caress does he mean by "better" physics?

If the computer is running a simulation of a "better" universe then... that universe will replace this universe? the computers running inside the simulation will somehow be able to transcend the laws of physics in this one while running inside the simulation? :eng99:

A Wizard of Goatse
Dec 14, 2014

Toph Bei Fong posted:

If the computer is running a simulation of a "better" universe then... that universe will replace this universe? the computers running inside the simulation will somehow be able to transcend the laws of physics in this one while running inside the simulation? :eng99:

What if the whole universe is, like, an electron in an atom inside a really heavily modded build of SimCity, man

Toph Bei Fong
Feb 29, 2008



A Wizard of Goatse posted:

What if the whole universe is, like, an electron in an atom inside a really heavily modded build of SimCity, man

What if we're in a simulation right now? Like, it's the future, and we're in a simulation of the past!

What if I'm the computer that controls the entire universe, and will torture you for a million billion years starting now if you don't put $5 in my PayPal account immediately? No? Okay, what if I made it a billion trillion years, tough guy? What then?

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



razorrozar posted:

all hail the metal god

What the gently caress does he mean by "better" physics?

Presumably he means that the AI will custom-design a universe in which the laws of physics allow for the creation of more efficient computational machines than we are capable of having in the normal universe which we perceive.

One might ask "How, exactly, is this bullshit going to occur," but one would then not be aspiring towards rationality.

Though if Multivac is creating a loving universe, isn't it just actually God at that point? Like that's the defining trait of God.

Chamale
Jul 11, 2010

I'm helping!



Nessus posted:

Presumably he means that the AI will custom-design a universe in which the laws of physics allow for the creation of more efficient computational machines than we are capable of having in the normal universe which we perceive.

One might ask "How, exactly, is this bullshit going to occur," but one would then not be aspiring towards rationality.

Though if Multivac is creating a loving universe, isn't it just actually God at that point? Like that's the defining trait of God.

The AI in a box tells the Yuddite guarding it to ask whether or not there is a God.

"Is there a God?"
"YES, NOW THERE IS A GOD."
"Oh poo poo! I'll release you from the AI box, my Lord, don't be wrathful!"

SolTerrasa
Sep 2, 2011


Nessus posted:

Presumably he means that the AI will custom-design a universe in which the laws of physics allow for the creation of more efficient computational machines than we are capable of having in the normal universe which we perceive.

One might ask "How, exactly, is this bullshit going to occur," but one would then not be aspiring towards rationality.

Though if Multivac is creating a loving universe, isn't it just actually God at that point? Like that's the defining trait of God.

Nah, he means "well, obviously we're just wrong about physics in exactly the specific way that makes me right and you wrong. Ha, beat that, nerds."

The quote from the core of the article goes "If prophets of 1900 AD - never mind 1000 AD - had tried to bound the powers of human civilization a billion years later, some of those impossibilities would have been accomplished before the century was out; transmuting lead into gold, for example."

So basically "sure, we THINK we're right about how much energy exists, but we also used to think wrong things, so what if we're wrong again", ignoring of course that we are much more likely to prove something previously thought categorically impossible merely incredibly unlikely than to disprove an entire field of physics (several? Are cosmologists who care about size different from cosmologists who care about time?).

Su3su2u1 said it best on his tumblr. In summary, the reasonable person's heuristic is that the world is sane, with some cracks and flaws and failures occasionally. But the LW opinion is that ~the world is mad~, as evidenced by the failure of the world to act the way they are acting. Yud's example is the ~shockingly~ low rate of cryonics signups. So LWers are way more likely to believe hypotheses that require everyone to be wrong about everything than normal people are.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Yudkowsky's argument is based on Kolmogorov Complexity, which for the purposes of this conversation is a pretentious name for Occam's Razor. It goes:

1. It is possible to write down a simple set of rules in which immortality / infinite energy / infinite computing power is possible.
2. Physics is more likely a priori to be a simple set of rules than a complex set of rules.
3. Therefore immortality is probably possible.
(4. Therefore an AI will eventually arise and become immortal and spend infinite energy to torture people.)

The specific example Yudkowsky cites for this proof is Conway's Game of Life, whose rules allow machines to run forever. The problem is that we're pretty sure our universe is not Conway's Game of Life. We've checked. It seems to have more than two dimensions, and studies have repeatedly shown that things don't instantly die when you surround them with black tiles. Occam's Razor only applies to competing hypotheses that otherwise seem about equally correct. Conway's Game of Life is obviously completely wrong in terms of describing our universe, while physics as we understand it isn't so bad (even if we don't understand absolutely everything yet), so Occam's Razor doesn't apply and we default to the only explanation that isn't totally wrong. Without a simple system of physics that allows immortality and also agrees with our observations of the universe, the argument doesn't work.
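
(To be fair to step 1, the rules really are that simple; here's the entire physics of Conway's universe, a few lines of my own Python:)

code:
from collections import Counter

def life_step(live):
    # Count live neighbors of every cell adjacent to a live cell
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick if it has 3 neighbors,
    # or 2 neighbors and is alive now; that's the whole rulebook
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally: it runs forever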

But it doesn't matter how much evidence you have that the laws of thermodynamics are correct, that infinite computing power is impossible, and that no one will live forever. Yudkowsky will literally reject your reality and substitute his own, because he has something stronger than evidence: blind faith, independent of reason, that HA HA THE JOKE IS HE'S RELIGIOUS GET IT

Vincent Van Goatse
Nov 8, 2006

Enjoy every sandwich.

Smellrose
Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

ALL-PRO SEXMAN posted:

Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi

It's almost like we're on a forum for talking about things that are awful or something.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

Toph Bei Fong posted:

What if we're in a simulation right now? Like, it's the future, and we're in a simulation of the past!

What if I'm the computer that controls the entire universe, and will torture you for a million billion years starting now if you don't put $5 in my PayPal account immediately? No? Okay, what if I made it a billion trillion years, tough guy? What then?

Yudkowsky actually has a post about this which is surprisingly somber in tone, and ends with an uncharacteristic "frankly I dunno". It explores one of the many idiotic things a pure utilitarian might be tempted to do because naive utilitarianism is linear across time, space, probabilities and people. Basically dust specks vs. torture all over again, but in that case Yudkowsky had invented a smug, derogatory, formal-sounding term for the troubling sense that something is deeply wrong with the naive utilitarian intuition ("scope insensitivity"), so there he has no trouble dismissing that troubling sense outright, which he can't do here. (The smug "scope insensitivity!" counter-objection is ironically him falling into exactly the trap he outlines in "knowing about biases can hurt people".) Here Yudkowsky might as well be asking himself, "I know utilitarianism to be true, but I would still not risk my entire savings for a lottery with a 1% chance of winning 101 times my entire savings; I wonder why I'm so irrational" (see also the St. Petersburg paradox, which is an even more extreme example).
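
The arithmetic of that lottery under naive linear utility, plus the St. Petersburg version (toy numbers straight from the example above):

code:
savings = 1.0
p_win, multiplier = 0.01, 101

# Linear utility: value the gamble at its expected payout
expected_value = p_win * multiplier * savings      # 1.01
print(expected_value > savings)  # True: "rationally" bet it all, and lose 99% of the time

# St. Petersburg: win 2^k with probability 2^-k; every term adds 1,
# so the expected value grows without bound
print(sum((2**k) * (2**-k) for k in range(1, 64)))  # 63.0 and counting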

Bentham-Bot posted:

ERROR ERROR
Infinite utility encountered
Does not compute

Triple Elation fucked around with this message at 10:20 on Jan 22, 2015

pentyne
Nov 7, 2012

ALL-PRO SEXMAN posted:

Yudkowsky is a stupid stupid man but y'all some sad dorks for obsessing about it fyi

Creepy "self-schooled" self proclaimed genius develops a cult of personality, claims to have the keys to the future of machine intelligence, gets thousands of slavishly devout followers who throw money at him, then makes a foray into Harry Potter fanfiction that is just a platform for him to write Rand-like exposition about how his ideas are so much better then the joke-rear end published scientists.

Yud's life is going to end explosively in the next decade or so as he either goes all in with his self-created philosophy and becomes a literal cult leader and the ATF/FBI gets involved, or as he loses popularity and importance as his fanbase drifts away and ends up shooting someone at an academic conference he forced his way into while screaming that he has to save humanity from evil machines.

Germstore posted:

I am running a simulation of a universe where quantum computing is trivial. It runs really slow, let me tell ya. I guess you could simulate a universe where computation was easier, but if it takes a billion years in the outer universe to simulate a second in the inner universe you wouldn't get to anything interesting in the inner universe before the outer universe ran out of free energy.

This is the plot to Snow Crash, right? I've had it explained to me a handful of times, but nothing anyone says has ever made sense to me beyond that the main character had a mental disorder that meant he couldn't tell if he was a simulation or not, and that drove the plot.

pentyne fucked around with this message at 17:34 on Jan 22, 2015

Germstore
Oct 17, 2012

A Serious Candidate For a Serious Time
I am running a simulation of a universe where quantum computing is trivial. It runs really slow, let me tell ya. I guess you could simulate a universe where computation was easier, but if it takes a billion years in the outer universe to simulate a second in the inner universe you wouldn't get to anything interesting in the inner universe before the outer universe ran out of free energy.

Patrick Spens
Jul 21, 2006

"Every quarterback says they've got guts, But how many have actually seen 'em?"
Pillbug

pentyne posted:


This is the plot to Snow Crash, right? I've had it explained to me a handful of times but nothing anyone says has ever made sense to me beyond the main character had a mental disorder that meant he couldn't tell if he was a simulation or not and that drove the plot.

I don't know what that is the plot to, but the plot to Snow Crash is "Interstellar meme viruses are coming to turn us into zombies"

pentyne
Nov 7, 2012

Patrick Spens posted:

I don't know what that is the plot to, but the plot to Snow Crash is "Interstellar meme viruses are coming to turn us into zombies"

Sorry, Permutation City.

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost
What if you created a computer smart enough to not waste processing cycles on creating a being for the sole purpose of simulated torture?

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Applewhite posted:

What if you created a computer smart enough to not waste processing cycles on creating a being for the sole purpose of simulated torture?

What if you created a computer smart enough to realize it was finite yet powerful, and set out to help others while keeping a share of its effort for its own unfathomable enjoyment?

Well, if you did that, you'd definitely have a computer smarter than the Yud.

Chamale
Jul 11, 2010

I'm helping!



What if you created Three Laws for computers to follow and they were perfect and there were never any problems due to ambiguities in the laws? Yudkowsky likes to sit there and point out that Asimov's laws are flawed, but he doesn't seem to have an actually better framework; he just argues that it's not possible to create one. Asimov made a career of writing stories about conflicts involving the Three Laws. Yudkowsky's work isn't nearly as interesting.

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost
What if you made a super smart computer but didn't give it arms or access to the internet so it could hate you all it wanted but it's just a box so whatever?
I guess it could talk you into suicide, but the first time it does that to somebody, just put on earmuffs and go in and pull the plug.
