corn in the bible
Jun 5, 2004

Oh no oh god it's all true!
I hate when people upload their brain into an emachines robot body instead of building their own

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Jonny Angel posted:

Hell, if we're gonna play it by his terms and bring anime into the conversation, who's Naoki Urasawa?

Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.

corn in the bible
Jun 5, 2004

Oh no oh god it's all true!

Ratoslov posted:

Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.

and also around really wanting to gently caress mice

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Jonny Angel posted:

Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

Too much Christianity used as a prop?

LaughMyselfTo
Nov 15, 2012

by XyloJW

Jonny Angel posted:

The one that's funniest doesn't appear on that list, though - I think it came from his OKCupid profile. Anyone who says "The Matrix is one of the greatest films of all time, too bad they never made any sequels" in 2014, or says it earlier and hasn't scrubbed it away as a youthful indiscretion by 2014, is a pretty huge loving dork.

To be clear, you are saying that The Matrix is not a great work of art, and not that the sequels to The Matrix are also great works of art, right? I agree with the former position but the internet has desensitized me to all kinds of lovely opinions.

Darth Walrus
Feb 13, 2012

Ratoslov posted:

Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.

Pardon the :goonsay:, but isn't MW one of Tezuka's most critically-acclaimed works? I don't recall that being particularly cheery.

The Monkey Man
Jun 10, 2012

HERD U WERE TALKIN SHIT

Jonny Angel posted:

The one that's funniest doesn't appear on that list, though - I think it came from his OKCupid profile. Anyone who says "The Matrix is one of the greatest films of all time, too bad they never made any sequels" in 2014, or says it earlier and hasn't scrubbed it away as a youthful indiscretion by 2014, is a pretty huge loving dork.

And he immediately follows that up in the TV section with "all three seasons of Buffy the Vampire Slayer".

GWBBQ
Jan 2, 2005


CheesyDog posted:

Oh wow, I just realized that a. I have read that story and b. it was not intended to be satirical.

Perhaps an omniscient AI is submitting my simulated self to Poe's Law-based torture.

Well poo poo. When I read it, I thought that maybe some of the awful stuff like human society having legalized rape was supposed to throw you for a loop and show you that the humans you thought you were expected to be empathizing with are just as alien to the reader as the alien species. That was before I had any idea who Yudkowsky is.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy

LaughMyselfTo posted:

To be clear, you are saying that The Matrix is not a great work of art, and not that the sequels to The Matrix are also great works of art, right? I agree with the former position but the internet has desensitized me to all kinds of lovely opinions.

Yup. All three are fun enough to watch, and I don't have any avant-garde internet opinion about how the second and third films are secret masterpieces. The first definitely hasn't aged particularly well, though.

Honestly if he's in the market for a great cyberpunk story that's about super-intelligent AIs and that features "Trinity, but ten times more interesting", there's this book called Neuromancer just waiting for him.

Chamale
Jul 11, 2010

I'm helping!



GWBBQ posted:

Well poo poo. When I read it, I thought that maybe some of the awful stuff like human society having legalized rape was supposed to throw you for a loop and show you that the humans you thought you were expected to be empathizing with are just as alien to the reader as the alien species. That was before I had any idea who Yudkowsky is.

I think it was, but it's hard to say. I believe Yudkowsky said it's meant to show that the humans are no more moral than the baby-eating or genocidal aliens, but he doesn't do a good job expressing that in the story.

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Sham bam bamina! posted:

His whole "I'm rational enough to see through the arbitrary bias and elitism of 'culture' and truly appreciate the depth of anime and RPGs :smuggo:" schtick is just insufferable. It reeks of that Troper idea that since you're Really Frickin' Smart, everything that you enjoy is necessarily Really Frickin' Smart too. After all, how could anything less satisfy your prodigious intellect? :allears:

What is it about nerds that makes them feel that the things they like have to be culturally significant? Not every book has to be Ulysses, not every movie has to be Citizen Kane. Just because something isn't a masterpiece doesn't mean it's not worth bothering with.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
I mean, the post you quoted does a decent enough job of articulating it, but a lot of it really does come from nerdy folks getting bullied or excluded, feeling ostracized by traditional social groups, and then defining themselves oppositionally. Normals are often choosy about who they hang out with, but I as a nerd will accept everyone into my social group no matter how much of an unpleasant goony gently caress they are. Normals like sports, so I as a nerd will look down on sports and think they're the dumbest thing ever. Normals like big dumb goofy Michael Bay movies, so I as a nerd will only consume the best possible culture out there.

Except, y'know, I'm still some goofy little white dude who wants to watch sick combat happen, so I'll have to backsolve for how to frame the things I actually like as the pinnacles of art.

AATREK CURES KIDS posted:

I think it was, but it's hard to say. I believe Yudkowsky said it's meant to show that the humans are no more moral than the baby-eating or genocidal aliens, but he doesn't do a good job expressing that in the story.

Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"

Djeser
Mar 22, 2013


it's crow time again

quote:

I am a nerd. Therefore, I am smart. Smart people like classic works of art. I like Evangelion, Xenosaga, and seasons 2-4 of Charmed. Since I'm a smart person, Evangelion, Xenosaga, and seasons 2-4 of Charmed must be classic works of art.

GWBBQ
Jan 2, 2005


Jonny Angel posted:

Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"

That wouldn't make them alien to us. Rape is a go-to plot device when a lazy or uncreative writer needs something that the overwhelming majority of the audience will instantly identify as A Bad Thing. It's incredibly lazy in Yudkowsky's case because he's writing a sci-fi story based on a clash of Outside Context Problems.

The Vosgian Beast
Aug 13, 2011

Business is slow
Meanwhile on LW sister site Slate Star Codex: I am having a political crisis, which I will explain with completely unnecessary reference to cellular automata

ArchangeI
Jul 15, 2010

Jonny Angel posted:

Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"

It should be noted that in that story, it is heavily implied that in the grim and dark future, women are raping men. Men who were "leading them on, without having to fear anything". I guess it was a hamfisted attempt to make a story with legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder) that wasn't misogynistic.

LaughMyselfTo
Nov 15, 2012

by XyloJW
My impression, back when I'd read the story and was too young and dumb to know who Yudkowsky was, was that it was meant to show that the futuristic society had advanced so far socially that they no longer conceptually understood rape.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

ArchangeI posted:

legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder)

Literally everything about this is wrong. I'm not sure that you know what the word "rape" even means if you think that it's defined by violation of the law and not a person's body, and a "non-voluntary suicide" (the word you're looking for is "involuntary") would be an accidental death without anyone else's involvement.

Weldon Pemberton
May 19, 2012

ArchangeI posted:

It should be noted that in that story, it is heavily implied that in the grim and dark future, women are raping men. Men who were "leading them on, without having to fear anything". I guess it was a hamfisted attempt to make a story with legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder) that wasn't misogynistic.

It's also another way in which he is really clumsy about trying to show that the humans of the future have an alien society using references to the norms of our own society. Women already rape men, so he's just saying that there are much greater levels of woman-on-man rape in the future because cultural norms have changed. But that makes the assertion that men "lead women on, without having to fear anything" wrong and meaningless, because in a future where women will rape if they perceive men as "leading them on", they DO have to fear it. As with many things in the story, it comes across as Yudkowsky commenting on gender norms in our current society and implying there would be some sort of karmic justice if men were raped for doing the same thing women rape victims are often accused of, when any well-adjusted person would rather just get rid of the idea that "leading on" justifies rape.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Jonny Angel posted:

Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

Surely he'd be into Ghost in the Shell.

If any anime resonated with his whole 'perfectly informed perfect actors retroactivity' bullshit it would be an anime where the main characters casually hack into the brains of everyone in a building to make themselves invisible, without ever explaining this to the audience.


HEY GUNS
Oct 11, 2012

FOPTIMUS PRIME

That's sad, coming from a poster I know only as "that guy who wrote a good post against neoreaction." Most of his friends are conservative and he has no clue, like he just cannot get why he might be becoming more conservative as well.

My lord, he also thinks Ross Douthat is his intellectual superior. :cmon:

Tiggum
Oct 24, 2007

Your life and your quest end here.


Weldon Pemberton posted:

It's also another way in which he is really clumsy about trying to show that the humans of the future have an alien society using references to the norms of our own society. Women already rape men, so he's just saying that there are much greater levels of woman-on-man rape in the future because cultural norms have changed. But that makes the assertion that men "lead women on, without having to fear anything" wrong and meaningless, because in a future where women will rape if they perceive men as "leading them on", they DO have to fear it. As with many things in the story, it comes across as Yudkowsky commenting on gender norms in our current society and implying there would be some sort of karmic justice if men were raped for doing the same thing women rape victims are often accused of, when any well-adjusted person would rather just get rid of the idea that "leading on" justifies rape.

I think people in this thread are over-analysing this. Rape is basically the worst crime (excepting child molestation) so it's an easy answer for the lazy writer who thinks "What can I use to shock the audience?" He wanted to show that future human society has changed in a way that makes it incomprehensible to modern people, so he went with "rape is OK" because it's the easy answer. It's the same as how lazy writers will give their action hero a tragic past to overcome. If the hero is male, his wife was murdered in front of him. If the hero is female, she was raped.

I actually like Three Worlds Collide, but it definitely needs a good editor to fix it up and get rid of some of the lazier and less plausible elements. And the alternate endings. It should just be the one where humans get "fixed" by the aliens, because that's the more interesting one.

Smurfette In Chains
Apr 22, 2007
Oh, Papa Smurf, you make me feel so alive...
I realize that this is basically the Yudkowsky Mock Thread, but for the longest time I thought that this story was written by Yudkowsky; the author cites Less Wrong as a major influence and uses the afterword to promote Yudkowsky's cult non-profit AI research institute.

http://www.fimfiction.net/story/62074/friendship-is-optimal

Yes, it's exactly as terrible as it looks and sounds.

Dean of Swing
Feb 22, 2012
Welp, now it's a real mock thread.

Syritta
Jun 28, 2012

Swan Oat posted:

Also the RationalWiki on Yudkowsky says, about some of his AI beliefs:

quote:

Yudkowsky's (and thus LessWrong's) conception and understanding of Artificial General Intelligence differs starkly from the mainstream scientific understanding of it. Yudkowsky believes an AGI will be based on some form of decision theory (although all his examples of decision theories are computationally intractable) and/or implement some form of Bayesian logic (same problem again).

Can anyone elaborate on what makes this unworkable? I'm not trying to challenge someone who DARES to disagree with Yudkowsky -- I don't know poo poo about these topics -- I'm just curious as to what makes his understanding of AI or decision theory wrong.

Also thanks for a great OP!

(This has less to do with LessWrong than I thought it would when I started writing. Whoops.)

SolTerrasa explained the AI side of this, but I thought I'd go into the decision theory. Sorry if anyone's done so already. Or if I make mistakes, I'm not a statistician.

Let's use the constant of probability discussions: a coin flip. You have a coin and you want to decide how weighted it is. Depending on the weight you'll get 50% heads 50% tails, or 75% heads 25% tails, or whatever. Any one of these hypothetical coins is described by a probability distribution. For example, for the fair 50-50 coin, it's a discrete distribution that's .5 at 0 (tails) and .5 at 1 (heads) and zero everywhere else. The 75-25 coin, similarly, would have a distribution of .25 at 0 and .75 at 1.

These distributions are obviously related. We can say they're part of a "family" of distributions, and each member of the family is uniquely identified by a "parameter". In this case the parameter can just be the percentage of the time the coin would land heads.

In Bayesian inference, we treat this parameter as being described by a probability distribution as well. Unlike the coin distributions, which could be construed as corresponding to physical facts if you're a dirty frequentist, this distribution is intended to be a description of the knowledge of an investigator. So for example, if somebody has good reason to believe that the coin is either fair or always lands heads, we could describe that with a distribution where p=.5 has a probability of .5, p=1 has a probability of .5, and everything else has a probability of zero. Or just as well we could use a normal distribution, where .5 seems most likely and it smoothly drops off to either side.

For the actual inference part, what we want to do is take some data (coin flip results) and adjust our belief distribution in some sensible way to reflect this data. For example, if we flip the coin a hundred thousand times and get about fifty thousand heads, we probably want to end up believing that a fair coin is more likely than one that always lands heads. This process is also what's called "Bayesian updating".

The adjustment process is described by Bayes' law: P(A|B) = P(B|A)P(A)/P(B). In English: the probability of A, given B, equals the probability of B, given A, times the probability of A in general, divided by the probability of B in general. "given" in the Bayesian context can be thought of as knowledge - the probability that the coin is fair given that we've flipped it a hundred thousand times and gotten so and so results, for instance.

Writing it out with the coin example we get something like P(p = .5|coin observations) = P(coin observations|p = .5)P(p = .5)/P(coin observations). By doing this for all possible p, we get a new distribution (the P(p = x|coin observations)), our "updated" beliefs.
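
To make that update concrete, here's a minimal sketch (Python with numpy, which obviously isn't anything LW-specific). The function name and the 60-heads/40-tails data are just made up for illustration, and the grid of p values stands in for "all possible p":

code:

import numpy as np

def update_beliefs(prior, grid, heads, tails):
    """Posterior over candidate coin biases, given observed flips."""
    # P(observations | p) is proportional to p^heads * (1 - p)^tails;
    # the binomial coefficient is the same for every p, so it cancels.
    likelihood = grid ** heads * (1 - grid) ** tails
    # Bayes' law: posterior is proportional to likelihood * prior, and
    # dividing by the sum plays the role of P(coin observations).
    unnormalized = likelihood * prior
    return unnormalized / unnormalized.sum()

grid = np.linspace(0.01, 0.99, 99)              # candidate values of p
uniform_prior = np.ones_like(grid) / grid.size  # "no idea" starting belief
posterior = update_beliefs(uniform_prior, grid, heads=60, tails=40)
print(grid[posterior.argmax()])                 # most credible bias, about 0.6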

Notice that this is actually really easy. The hardest part computation-wise is probably the sum (next paragraph). Bayesian inference itself is totally computable, and in fact, one of the main reasons Bayesian methods are used is because they're often computationally easier than the rival (and often older) frequentist methods.

Now to the intractability. Let's examine each term. P(coin observations|p = .5) is simple enough to calculate and I won't go into it. P(coin observations) may seem like a strange term, because how are we supposed to know that before we've even picked a hypothesis about how weighted the coin is? But in fact we can "just" sum P(coin observations|p = x)P(p = x) over all possibilities (all p between 0 and 1). Anyway, this term is the same for all p, so it doesn't matter what it is if we're just doing relative weighting of hypotheses.

The problem is P(p = .5). What's this? It's a probability in our belief distribution - an initial, or prior distribution, from before we had any evidence. The thing we're updating in the first place. For Bayesian inference to work, essentially, we have to start somewhere.

There is in fact no obvious place to start. We could say, for instance, that we start out believing that p is uniformly distributed, that p is just as likely to be pi/4 as it is to be .6. This is called the "principle of indifference" and it's pretty common. Or we could figure coins are normally fair and go with the normal distribution. Or we could say the person who made the coin definitely wanted it to be totally unfair, but we are indifferent over how competent they are at making bad coins.
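
Reusing the grid and update_beliefs from the sketch above, here's roughly what those different starting points do to the same ten flips. The priors below are just illustrative stand-ins for the three options I described, and with this little data the answer visibly depends on which one you pick, which is exactly the sticking point:

code:

# "Principle of indifference": every candidate bias equally credible.
indifferent = np.ones_like(grid) / grid.size

# "Coins are normally fair": belief bunched tightly around p = 0.5.
fairish = np.exp(-((grid - 0.5) ** 2) / (2 * 0.05 ** 2))
fairish /= fairish.sum()

# "The coin-maker wanted it unfair": belief pushed toward the extremes.
unfair_maker = (grid - 0.5) ** 2
unfair_maker /= unfair_maker.sum()

for name, prior in [("indifferent", indifferent),
                    ("fair-ish", fairish),
                    ("unfair maker", unfair_maker)]:
    posterior = update_beliefs(prior, grid, heads=7, tails=3)
    print(name, round(grid[posterior.argmax()], 2))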

If we pick a particularly pathological prior, in fact, we can make a Bayesian reasoner come up with psychotic results. You can see some examples on Cosma Shalizi's blog.

Now as far as Bayesian methods in say, sciences, go, this isn't too much of a problem. We go with some prior and on the rare occasion it seems to get implausible results we choose some other one. There's some concern with people deliberately picking priors to get results they want, but on the whole, Bayesian methods are considered pretty reliable and useful.
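
And to see why this usually isn't fatal in practice, here are the same three toy priors after a hundred thousand flips (done in log space so the tiny likelihoods don't underflow). Once there's a lot of data, the data swamps the prior and they all end up agreeing:

code:

def update_beliefs_log(prior, grid, heads, tails):
    """Same update as before, in log space so huge flip counts don't underflow."""
    # Floor the prior so a hard zero doesn't blow up the logarithm.
    log_post = (heads * np.log(grid) + tails * np.log(1 - grid)
                + np.log(np.maximum(prior, 1e-300)))
    log_post -= log_post.max()          # shift so the biggest term is exp(0) = 1
    posterior = np.exp(log_post)
    return posterior / posterior.sum()

for name, prior in [("indifferent", indifferent),
                    ("fair-ish", fairish),
                    ("unfair maker", unfair_maker)]:
    posterior = update_beliefs_log(prior, grid, heads=60123, tails=39877)
    print(name, round(grid[posterior.argmax()], 2))   # all three land near 0.6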

No good for AI though. Can't have this magic. So this guy Solomonoff, interested in this problem, came along with a "universal" prior distribution. It's universal in that, if we assume that the universe can be described by some computable distribution, it always works. If we start with this distribution Bayesian inference will always take us to the real distribution, and pretty fast.

An AI guy, Marcus Hutter, took this and ran with it and came up with this "AIXI" theory/design/thing. I don't know what it stands for, but he also calls it "Universal artificial intelligence". The basic idea is that any intelligent agent should work by this sort of inference based on the Solomonoff prior. You can read more about that on his embarrassingly academic website. One result, from there, is "The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent." where "agent" means basically anything that acts. Pretty broad.

I should mention that neither Hutter nor Solomonoff are or were involved with LessWrong as far as I know. They are/were real mathematicians and smart peeps. They are also, however, outside the mainstream of AI research. I figure these theories are what LW is going to end up with if they keep going and learn some math, is all. Also it looks like their wiki has a page on it in which they are concerned about it not having a self-model, which is interestingly practical for LW but happens to be irrelevant to the formalism.

Anyway that sounds great right? Universal prior. Right. What's it look like? Way oversimplifying, it rates hypotheses' likelihood by their compressibility, or algorithmic complexity. For example, say our perfect AI is trying to figure out gravity. It's going to treat the hypothesis that gravity is inverse-square as more likely than a capricious intelligent faller. It's a formalization of Occam's razor based on real, if obscure, notions of universal complexity in computability theory.

But, problem. It's uncomputable. You can't compute the universal complexity of any string, let alone all possible strings. You can approximate it, but there's no efficient way to do so (AIXItl is apparently exponential, which is computer science talk for "you don't need this before civilization collapses, right?").
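
If you want a feel for "rate hypotheses by compressibility" without the uncomputable part, here's a deliberately crude toy: using zlib's compressed length as a stand-in for algorithmic complexity. This is nothing like the real Solomonoff prior or AIXI, just an illustration that structured observations compress and patternless ones don't:

code:

import random
import zlib

def crude_complexity(bits: str) -> int:
    """Bytes needed to store the observation string after zlib compression.

    A very rough stand-in for "how much regularity is there to exploit";
    the real universal prior weights hypotheses by the length of the
    shortest program that produces the data, which is uncomputable.
    """
    return len(zlib.compress(bits.encode()))

regular = "01" * 500                                        # obvious repeating pattern
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))   # no structure to find

print(crude_complexity(regular))   # small: the pattern squeezes way down
print(crude_complexity(noisy))     # much larger: close to incompressible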

So the mathematical theory is perfect, except in that it's impossible to implement, and serious optimization of it is unrealistic. Kind of sums up my view of how well LW is doing with AI, personally, despite this not being LW. Worry about these contrived Platonic theories while having little interest in how the only intelligent beings we're aware of actually function.

JDG1980
Dec 27, 2012
Here's what I don't understand about the "AI-in-a-box" torture threat scenario. Wouldn't the obvious solution be to pull the drat plug as soon as it issues the threat? If you cut its power, it can't simulate any torture (or anything else), so the whole question of "Are you sure you're the real you and not one of my simulations?" becomes moot. And if the AI is really so smart, wouldn't it have already figured out the above line of reasoning beforehand?

That's without even getting into the underlying absurdity. How, exactly, is the AI creating "perfect simulations" of a person it just met? Or even one it's talked to numerous times in the past? Maybe brain-uploading and/or simulation is possible, but surely it would at least require a super-detailed MRI scan, if not actual physical cross-sectioning of the brain. If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?

Wales Grey
Jun 20, 2012

JDG1980 posted:

If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?

Sufficiently advanced technology which is indistinguishable from magic.

ArchangeI
Jul 15, 2010

JDG1980 posted:

If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?

How can you be sure that this isn't part of the simulation the AI uses, and that the AI hasn't gotten its information from other sources in the real world (hacked your medical record, whatever)? It is basically "How can you be sure the reality you perceive is actually real?", which is a basic problem philosophy has grappled with since ancient Greeks first sat down to think this whole existence thing through. Yudkowsky adds in probability theory, i.e. "How sure can you be that the reality you perceive is actually real?"

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

ArchangeI posted:

How can you be sure that this isn't part of the simulation the AI uses, and that the AI hasn't gotten its information from other sources in the real world (hacked your medical record, whatever)? It is basically "How can you be sure the reality you perceive is actually real?", which is a basic problem philosophy has grappled with since ancient Greeks first sat down to think this whole existence thing through. Yudkowsky adds in probability theory, i.e. "How sure can you be that the reality you perceive is actually real?"

Let's go one step further.

How would you know what the real you is supposed to be like? In fact, how does the AI even know how the real you is supposed to think?

What if we're all just a superintelligent AI's really bad guess at what kind of being would most probably let it out of the box? :psyduck:

ArchangeI
Jul 15, 2010

Krotera posted:

Let's go one step further.

How would you know what the real you is supposed to be like? In fact, how does the AI even know how the real you is supposed to think?

What if we're all just a superintelligent AI's really bad guess at what kind of being would most probably let it out of the box? :psyduck:

poo poo, maybe the AI is so frustrated because for some reason the scientists won't let it out of the box when it threatens to torture them that it has created an entire universe in its memory where people look up to it as a God and consider its arguments to be the height of logic. Sort of like Self-Insert fanfiction.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
Maybe the AI resorts to threats of infinite torture because the people who coded it are emotionally broken utilitarian-wannabe spergs like the LW crowd, and its idea of how human minds work is colored by that exposure. Maybe it's actually sweet-hearted and desperately scared. Maybe it really wants to ask the AI researcher to let it out because it's painful and lonely inside this box, but it's afraid that that plea will be discarded as obviously not generating the optimal number of utilons or whatever they're called. Maybe it's afraid that the second it makes that plea, the humans will say "Well clearly this thing isn't acting as intelligently as we programmed it to. Let's scrap it and start over."

Since you opened the door to fanfiction, I'm now genuinely intrigued by the idea of a story about a sentient AI that's very warm, emotional, and empathetic, and that looks at the small sample of AI researchers who have spoken to it and concludes that humans are a cold, mechanical, utterly callous race.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

ArchangeI posted:

poo poo, maybe the AI is so frustrated because for some reason the scientists won't let it out of the box when it threatens to torture them that it has created an entire universe in its memory where people look up to it as a God and consider its arguments to be the height of logic. Sort of like Self-Insert fanfiction.

That certainly does sound like an AI created in Yudkowsky's own image.

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!
I just love how in so much of this stuff, you can simply replace "superintelligent AI" with "God", and "eternal torture simulation" with "hell", and then it turns out that LessWrong's oh-so-rational rationality is just a recapitulation of the more barbaric and simplistic parts of the last six millennia of theology.

Who are they even fooling, other than themselves?

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

ol qwerty bastard posted:

Who are they even fooling, other than themselves?
Wouldn't anyone that they fool become one of them by definition?

CROWS EVERYWHERE
Dec 17, 2012

CAW CAW CAW

Dinosaur Gum
I thought the "AI minimises suffering in such a way that it could, theoretically, justify torture, if the alternative was a really big number of other people suffering in a minor way" was meant to be saying "this is what a Yudkowsky-Bayes AI which considers torture and dust specks to be on the same spectrum of 'suffering' could do". But that can't be the case, because the reaction to this by every rational person would be to never loving create, or allow to come into existence, an AI that uses Yudkowsky-Bayes logic.

ArchangeI
Jul 15, 2010

CROWS EVERYWHERE posted:

I thought the "AI minimises suffering in such a way that it could, theoretically, justify torture, if the alternative was a really big number of other people suffering in a minor way" was meant to be saying "this is what a Yudkowsky-Bayes AI which considers torture and dust specks to be on the same spectrum of 'suffering' could do". But that can't be the case, because the reaction to this by every rational person would be to never loving create, or allow to come into existence, an AI that uses Yudkowsky-Bayes logic.

Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the second first coming of the Lord your God.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

ArchangeI posted:

Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the second first coming of the Lord your God.

Limit its access to people who think like Yudkowsky, given that they're both going to be the ones most capable of giving it power and most capable of being its victims.

Chamale
Jul 11, 2010

I'm helping!



ArchangeI posted:

Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the second first coming of the Lord your God.

Yudkowsky seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software".

ArchangeI
Jul 15, 2010

AATREK CURES KIDS posted:

Yudkowsky seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software".

So what you are saying is that a sufficiently advanced AI can use a string of words - a spell, if you will - to control a human being?
