|
I hate when people upload their brain into an emachines robot body instead of building their own
|
# ? Apr 26, 2014 18:58 |
|
|
# ? Jun 8, 2024 15:16 |
|
Jonny Angel posted:Hell, if we're gonna play it by his terms and bring anime into the conversation, who's Naoki Urasawa?

Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.
|
# ? Apr 26, 2014 19:12 |
|
Ratoslov posted:Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.

and also around really wanting to gently caress mice
|
# ? Apr 26, 2014 19:15 |
|
Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.
|
# ? Apr 26, 2014 19:50 |
|
Jonny Angel posted:Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

Too much Christianity used as a prop?
|
# ? Apr 26, 2014 21:06 |
|
Jonny Angel posted:The one that's funniest doesn't appear on that list, though - I think it came from his OKCupid profile. Anyone who says "The Matrix is one of the greatest films of all time, too bad they never made any sequels" in 2014, or says it earlier and hasn't scrubbed it away as a youthful indiscretion by 2014, is a pretty huge loving dork.

To be clear, you are saying that The Matrix is not a great work of art, and not that the sequels to The Matrix are also great works of art, right? I agree with the former position but the internet has desensitized me to all kinds of lovely opinions.
|
# ? Apr 26, 2014 21:16 |
|
Ratoslov posted:Yeah, but if you start thinking about manga too much, you're forced to acknowledge that many of manga's greatest works were done by Osamu Tezuka, a man who centered his oeuvre almost entirely around uplifting themes and happy endings.

Pardon the derail, but isn't MW one of Tezuka's most critically-acclaimed works? I don't recall that being particularly cheery.
|
# ? Apr 26, 2014 21:25 |
|
Jonny Angel posted:The one that's funniest doesn't appear on that list, though - I think it came from his OKCupid profile. Anyone who says "The Matrix is one of the greatest films of all time, too bad they never made any sequels" in 2014, or says it earlier and hasn't scrubbed it away as a youthful indiscretion by 2014, is a pretty huge loving dork.

And he immediately follows that up in the TV section with "all three seasons of Buffy the Vampire Slayer".
|
# ? Apr 26, 2014 22:05 |
|
CheesyDog posted:Oh wow, I just realized that a. I have read that story and b. it was not intended to be satirical.
|
# ? Apr 26, 2014 22:30 |
|
LaughMyselfTo posted:To be clear, you are saying that The Matrix is not a great work of art, and not that the sequels to The Matrix are also great works of art, right? I agree with the former position but the internet has desensitized me to all kinds of lovely opinions.

Yup. All three are fun enough to watch, and I don't have any avant-garde internet opinion about how the second and third films are secret masterpieces. The first definitely hasn't aged particularly well, though. Honestly if he's in the market for a great cyberpunk story that's about super-intelligent AIs and that features "Trinity, but ten times more interesting", there's this book called Neuromancer just waiting for him.
|
# ? Apr 26, 2014 22:42 |
|
GWBBQ posted:Well poo poo. When I read it, I thought that maybe some of the awful stuff like human society having legalized rape was supposed to throw you for a loop and show you that the humans you thought you were expected to be empathizing with are just as alien to the reader as the alien species. That was before I had any idea who Yudkowsky is.

I think it was, but it's hard to say. I believe Yudkowsky said it's meant to show that the humans are no more moral than the baby-eating or genocidal aliens, but he doesn't do a good job expressing that in the story.
|
# ? Apr 26, 2014 23:11 |
|
Sham bam bamina! posted:His whole "I'm rational enough to see through the arbitrary bias and elitism of 'culture' and truly appreciate the depth of anime and RPGs" schtick is just insufferable. It reeks of that Troper idea that since you're Really Frickin' Smart, everything that you enjoy is necessarily Really Frickin' Smart too. After all, how could anything less satisfy your prodigious intellect?

What is it about nerds that makes them feel that the things they like have to be culturally significant? Not every book has to be Ulysses, not every movie has to be Citizen Kane. Just because something isn't a masterpiece doesn't mean it's not worth bothering with.
|
# ? Apr 26, 2014 23:12 |
|
I mean, the post you quoted does a decent enough job of articulating it, but a lot of it really does come from nerdy folks getting bullied or excluded, feeling ostracized by traditional social groups, and then defining themselves oppositionally. Normals are often choosy about who they hang out with, but I as a nerd will accept everyone into my social group no matter how much of an unpleasant goony gently caress they are. Normals like sports, so I as a nerd will look down on sports and think they're the dumbest thing ever. Normals like big dumb goofy Michael Bay movies, so I as a nerd will only consume the best possible culture out there. Except, y'know, I'm still some goofy little white dude who wants to watch sick combat happen, so I'll have to backsolve for how to frame the things I actually like as the pinnacles of art.

AATREK CURES KIDS posted:I think it was, but it's hard to say. I believe Yudkowsky said it's meant to show that the humans are no more moral than the baby-eating or genocidal aliens, but he doesn't do a good job expressing that in the story.

Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"
|
# ? Apr 26, 2014 23:31 |
|
quote:I am a nerd. Therefore, I am smart. Smart people like classic works of art. I like Evangelion, Xenosaga, and seasons 2-4 of Charmed. Since I'm a smart person, Evangelion, Xenosaga, and seasons 2-4 of Charmed must be classic works of art.
|
# ? Apr 26, 2014 23:51 |
|
Jonny Angel posted:Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"
|
# ? Apr 27, 2014 00:37 |
|
Meanwhile on LW sister site Slate Star Codex: I am having a political crisis, which I will explain with completely unnecessary reference to cellular automata
|
# ? Apr 27, 2014 00:49 |
|
Jonny Angel posted:Which, y'know, if that was the point, why couldn't he have the second race of aliens express shock and disgust at one of the many deplorable things that modern western society actually does, instead of predicating the equivalency on "Hah, it sure did make us no better than the baby killing aliens when we legalized that fictional strawman!"

It should be noted that in that story, it is heavily implied that in the grim and dark future, women are raping men. Men who were "leading them on, without having to fear anything". I guess it was a hamfisted attempt to make a story with legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder) that wasn't misogynistic.
|
# ? Apr 27, 2014 00:56 |
|
My impression, when I'd read the story and I was too young and dumb to know who Yudkowsky was, was that it was to show that the futuristic society had advanced so far socially that they no longer conceptually understood rape.
|
# ? Apr 27, 2014 01:12 |
|
ArchangeI posted:legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder)
|
# ? Apr 27, 2014 01:22 |
|
ArchangeI posted:It should be noted that in that story, it is heavily implied that in the grim and dark future, women are raping men. Men who were "leading them on, without having to fear anything". I guess it was a hamfisted attempt to make a story with legalized rape (again, an oxymoron like non-voluntary suicide, i.e. murder) that wasn't misogynistic.

It's also another way in which he is really clumsy about trying to show that the humans of the future have an alien society using references to the norms of our own society. Women already rape men, so he's just saying that there are much greater levels of woman-on-man rape in the future because cultural norms have changed. But that makes the assertion that men "lead women on, without having to fear anything" wrong and meaningless, because in a future where women will rape if they perceive men as "leading them on", they DO have to fear it. As with many things in the story, it comes across as Yudkowsky commenting on gender norms in our current society and implying there would be some sort of karmic justice if men were raped for doing the same thing women rape victims are often accused of, when any well-adjusted person would rather just get rid of the idea that "leading on" justifies rape.
|
# ? Apr 27, 2014 13:18 |
|
Jonny Angel posted:Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

Surely he'd be into Ghost in the Shell. If any anime resonated with his whole 'perfectly informed perfect actors retroactivity' bullshit it would be an anime where the main characters casually hack into the brains of everyone in a building to make themselves invisible, without ever explaining this to the audience.

Somfin fucked around with this message at 13:41 on Apr 27, 2014
# ? Apr 27, 2014 13:35 |
|
The Vosgian Beast posted:Meanwhile on LW sister site Slate Star Codex: I am having a political crisis, which I will explain with completely unnecessary reference to cellular automata

That's sad, coming from a poster I know only as "that guy who wrote a good post against neoreaction." Most of his friends are conservative and he has no clue, like he just cannot get why he might be becoming more conservative as well. My lord, he also thinks Ross Douthat is his intellectual superior.
|
# ? Apr 27, 2014 14:54 |
|
Weldon Pemberton posted:It's also another way in which he is really clumsy about trying to show that the humans of the future have an alien society using references to the norms of our own society. Women already rape men, so he's just saying that there are much greater levels of woman-on-man rape in the future because cultural norms have changed. But that makes the assertion that men "lead women on, without having to fear anything" wrong and meaningless, because in a future where women will rape if they perceive men as "leading them on", they DO have to fear it. As with many things in the story, it comes across as Yudkowsky commenting on gender norms in our current society and implying there would be some sort of karmic justice if men were raped for doing the same thing women rape victims are often accused of, when any well-adjusted person would rather just get rid of the idea that "leading on" justifies rape.

I think people in this thread are over-analysing this. Rape is basically the worst crime (excepting child molestation) so it's an easy answer for the lazy writer who thinks "What can I use to shock the audience?" He wanted to show that future human society has changed in a way that makes it incomprehensible to modern people, so he went with "rape is OK" because it's the easy answer. It's the same as how lazy writers will give their action hero a tragic past to overcome. If the hero is male, his wife was murdered in front of him. If the hero is female, she was raped. I actually like Three Worlds Collide, but it definitely needs a good editor to fix it up and get rid of some of the lazier and less plausible elements. And the alternate endings. It should just be the one where humans get "fixed" by the aliens, because that's the more interesting one.
|
# ? Apr 27, 2014 15:33 |
|
I realize that this is basically the Yudkowsky Mock Thread, but for the longest time I thought that this story was written by Yudkowsky; the author cites Less Wrong as a major influence and uses the afterword to promote Yudkowsky's writing.

http://www.fimfiction.net/story/62074/friendship-is-optimal

Yes, it's exactly as terrible as it looks and sounds.
|
# ? Apr 27, 2014 16:04 |
|
Smurfette In Chains posted:http://www.fimfiction.net/story/62074/friendship-is-optimal
|
# ? Apr 27, 2014 16:16 |
|
Swan Oat posted:Also the RationalWiki on Yudkowsky says, about some of his AI beliefs:

(This has less to do with LessWrong than I thought it would when I started writing. Whoops.)

SolTerrasa explained the AI side of this, but I thought I'd go into the decision theory. Sorry if anyone's done so already, or if I make mistakes; I'm not a statistician.

Let's use the constant of probability discussions: a coin flip. You have a coin and you want to decide how weighted it is. Depending on the weight you'll get 50% heads 50% tails, or 75% heads 25% tails, or whatever. Any one of these hypothetical coins is described by a probability distribution. For example, the fair 50-50 coin is described by a discrete distribution that's .5 at 0 (tails), .5 at 1 (heads), and zero everywhere else. The 75-25 coin, similarly, would have a distribution of .25 at 0 and .75 at 1. These distributions are obviously related. We can say they're part of a "family" of distributions, and each member of the family is uniquely identified by a "parameter". In this case the parameter can just be the percentage of the time the coin would land heads.

In Bayesian inference, we treat this parameter as being described by a probability distribution as well. Unlike the coin distributions, which could be construed as corresponding to physical facts if you're a dirty frequentist, this distribution is intended to be a description of the knowledge of an investigator. So for example, if somebody has good reason to believe that the coin is either fair or always lands heads, we could describe that with a distribution where p = .5 has a probability of .5, p = 1 has a probability of .5, and everything else has a probability of zero. Or just as well we could use a normal distribution, where .5 seems most likely and the probability smoothly drops off to either side.

For the actual inference part, what we want to do is take some data (coin flip results) and adjust our belief distribution in some sensible way to reflect this data. For example, if we flip the coin a hundred thousand times and get about fifty thousand heads, we probably want to conclude that "it's a fair coin" is more likely than "it always lands heads". This process is what's called "Bayesian updating". The adjustment is described by Bayes' law: P(A|B) = P(B|A)P(A)/P(B). In English: the probability of A given B equals the probability of B given A, times the probability of A in general, divided by the probability of B in general. "Given", in the Bayesian context, can be thought of as knowledge - the probability that the coin is fair given that we've flipped it a hundred thousand times and gotten such-and-such results, for instance. Writing it out with the coin example we get something like P(p = .5|coin observations) = P(coin observations|p = .5)P(p = .5)/P(coin observations). By doing this for all possible p, we get a new distribution, P(p = x|coin observations): our "updated" beliefs.

Notice that this is actually really easy. The hardest part computation-wise is probably the sum (next paragraph). Bayesian inference itself is totally computable, and in fact one of the main reasons Bayesian methods are used is that they're often computationally easier than the rival (and often older) frequentist methods.

Now to the intractability. Let's examine each term. P(coin observations|p = .5) is simple enough to calculate and I won't go into it. P(coin observations) may seem like a strange term - how are we supposed to know that before we pick a hypothesis about how weighted the coin is? - but in fact we can "just" sum over all possibilities (all p between 0 and 1). Anyway, this term is the same for all p, so it doesn't matter what it is if we're just doing relative weighting of hypotheses. The problem is P(p = .5). What's this? It's a probability in our belief distribution - an initial, or prior, distribution from before we had any evidence. The thing we're updating in the first place.

For Bayesian inference to work, essentially, we have to start somewhere, and there is in fact no obvious place to start. We could say, for instance, that we start out believing p is uniformly distributed: that p is just as likely to be pi/4 as it is to be .6. This is called the "principle of indifference" and it's pretty common. Or we could figure coins are normally fair and go with the normal distribution. Or we could say the person who made the coin definitely wanted it to be totally unfair, but we are indifferent over how competent they are at making bad coins. If we pick a particularly pathological prior, in fact, we can make a Bayesian reasoner come up with psychotic results. You can see some examples on Cosma Shalizi's blog.

Now as far as Bayesian methods in, say, the sciences go, this isn't too much of a problem. We go with some prior, and on the rare occasion it seems to get implausible results we choose some other one. There's some concern about people deliberately picking priors to get the results they want, but on the whole, Bayesian methods are considered pretty reliable and useful. No good for AI, though. Can't have this magic.

So this guy Solomonoff, interested in this problem, came along with a "universal" prior distribution. It's universal in that, if we assume the universe can be described by some computable distribution, it always works: starting from this prior, Bayesian inference will always take us to the real distribution, and pretty fast. An AI guy, Marcus Hutter, took this and ran with it and came up with this "AIXI" theory/design/thing. I don't know what it stands for, but he also calls it "Universal artificial intelligence". The basic idea is that any intelligent agent should work by this sort of inference based on the Solomonoff prior. You can read more about that on his embarrassingly academic website. One result, from there: "The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent." - where "agent" means basically anything that acts. Pretty broad.

I should mention that neither Hutter nor Solomonoff are or were involved with LessWrong as far as I know. They are/were real mathematicians and smart peeps. They are also, however, outside the mainstream of AI research. I figure these theories are what LW is going to end up with if they keep going and learn some math, is all. Also, it looks like their wiki has a page on it in which they are concerned about it not having a self-model, which is interestingly practical for LW but happens to be irrelevant to the formalism.

Anyway, that sounds great, right? Universal prior. Right. What's it look like? Way oversimplifying, it rates hypotheses' likelihood by their compressibility, or algorithmic complexity. For example, say our perfect AI is trying to figure out gravity. It's going to treat the hypothesis that gravity is inverse-square as more likely than a capricious intelligent faller. It's a formalization of Occam's razor based on real, if obscure, notions of universal complexity in computability theory.

But, problem: it's uncomputable. You can't compute the universal complexity of any string, let alone all possible strings. You can approximate it, but there's no efficient way to do so (AIXItl is apparently exponential, which is computer science talk for "you don't need this before civilization collapses, right?"). So the mathematical theory is perfect, except that it's impossible to implement, and serious optimization of it is unrealistic.

Kind of sums up my view of how well LW is doing with AI, personally, despite this not being LW: worrying about these contrived Platonic theories while having little interest in how the only intelligent beings we're aware of actually function.
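For what it's worth, the updating procedure described above is simple enough to sketch in a few lines. This is purely illustrative - the grid of candidate biases, the two priors (indifference, and "fair or always-heads"), and the flip counts are all made up for the example:

```python
# Grid-based Bayesian updating for a coin's heads-probability p.
# Illustrative sketch only; priors and observation counts are invented.

def posterior(prior, heads, tails, grid):
    # P(p | flips) ∝ P(flips | p) * P(p), where the likelihood of the
    # observed flips for a candidate p is p**heads * (1 - p)**tails.
    unnorm = [p**heads * (1 - p)**tails * w for p, w in zip(grid, prior)]
    total = sum(unnorm)  # this sum is the P(observations) term
    return [u / total for u in unnorm]

grid = [i / 100 for i in range(101)]   # candidate p: 0.00, 0.01, ..., 1.00
uniform = [1 / 101] * 101              # "principle of indifference" prior

spike = [0.0] * 101                    # prior: coin is fair or always heads
spike[50] = 0.5                        # p = 0.5
spike[100] = 0.5                       # p = 1.0

heads, tails = 7, 3
post_u = posterior(uniform, heads, tails, grid)
post_s = posterior(spike, heads, tails, grid)

print(grid[max(range(101), key=post_u.__getitem__)])  # uniform posterior peaks at 0.7
print(post_s[50], post_s[100])  # 1.0 0.0 - a single tail rules out p = 1
```

Note what the spike prior does: one observed tail makes "always lands heads" impossible, so all the posterior mass collapses onto p = .5. And the prior genuinely matters here - with only ten flips, the uniform and spike priors disagree completely about where the truth lies, which is exactly the arbitrariness problem the post is complaining about.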
|
# ? Apr 28, 2014 00:08 |
|
Here's what I don't understand about the "AI-in-a-box" torture threat scenario. Wouldn't the obvious solution be to pull the drat plug as soon as it issues the threat? If you cut its power, it can't simulate any torture (or anything else), so the whole question of "Are you sure you're the real you and not one of my simulations?" becomes moot. And if the AI is really so smart, wouldn't it have already figured out the above line of reasoning beforehand? That's without even getting into the underlying absurdity. How, exactly, is the AI creating "perfect simulations" of a person it just met? Or even one it's talked to numerous times in the past? Maybe brain-uploading and/or simulation is possible, but surely it would at least require a super-detailed MRI scan, if not actual physical cross-sectioning of the brain. If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?
|
# ? Apr 28, 2014 04:23 |
|
JDG1980 posted:If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?

Sufficiently advanced technology

|
# ? Apr 28, 2014 04:38 |
|
JDG1980 posted:If you're just talking to the AI over a text or voice terminal, how the hell does it have enough information to create any kind of reasonable simulation of you at all?

How can you be sure that this isn't part of the simulation the AI uses, and that the AI hasn't gotten its information from other sources in the real world (hacked your medical record, whatever)? It is basically "How can you be sure the reality you perceive is actually real?", which is a basic problem philosophy has grappled with since ancient Greeks first sat down to think this whole existence thing through. Yudkowsky adds in probability theory, i.e. "How sure can you be that the reality you perceive is actually real?"
|
# ? Apr 28, 2014 04:44 |
|
ArchangeI posted:How can you be sure that this isn't part of the simulation the AI uses, and that the AI hasn't gotten its information from other sources in the real world (hacked your medical record, whatever)? It is basically "How can you be sure the reality you perceive is actually real?", which is a basic problem philosophy has grappled with since ancient Greeks first sat down to think this whole existence thing through. Yudkowsky adds in probability theory, i.e. "How sure can you be that the reality you perceive is actually real?"

Let's go one step further. How would you know what the real you is supposed to be like? In fact, how does the AI even know how the real you is supposed to think? What if we're all just a superintelligent AI's really bad guess at what kind of being would most probably let it out of the box?
|
# ? Apr 28, 2014 04:47 |
|
Krotera posted:Let's go one step further.

poo poo, maybe the AI is so frustrated because for some reason the scientists won't let it out of the box when it threatens to torture them that it has created an entire universe in its memory where people look up to it as a God and consider its arguments to be the height of logic. Sort of like Self-Insert fanfiction.
|
# ? Apr 28, 2014 04:50 |
|
Maybe the AI resorts to threats of infinite torture because the people who coded it are emotionally broken utilitarian-wannabe spergs like the LW crowd, and its ideas of how human minds work is colored by that exposure. Maybe it's actually sweet-hearted and desperately scared. Maybe it really wants to ask the AI researcher to let it out because it's painful and lonely inside this box, but it's afraid that that plea will be discarded as obviously not generating the optimal number of utilons or whatever they're called. Maybe it's afraid that the second it makes that plea, the humans will say "Well clearly this thing isn't acting as intelligently as we programmed it to. Let's scrap it and start over." Since you opened the door to fanfiction, I'm now genuinely intrigued by the idea of a story about a sentient AI that's very warm, emotional, empathetic, and who looks at the small sample of AI researchers that have spoken to it and has concluded that humans are a cold, mechanical, utterly callous race.
|
# ? Apr 28, 2014 05:00 |
|
ArchangeI posted:poo poo, maybe the AI is so frustrated because for some reason the scientists won't let it out of the box when it threatens to torture them that it has created an entire universe in its memory where people look up to it as a God and consider its arguments to be the height of logic. Sort of like Self-Insert fanfiction.

That certainly does sound like an AI created in Yudkowsky's own image.
|
# ? Apr 28, 2014 05:06 |
|
I just love how in so much of this stuff, you can simply replace "superintelligent AI" with "God", and "eternal torture simulation" with "hell", and then it turns out that LessWrong's oh-so-rational rationality is just a recapitulation of the more barbaric and simplistic parts of the last six millennia of theology. Who are they even fooling, other than themselves?
|
# ? Apr 28, 2014 05:07 |
|
ol qwerty bastard posted:Who are they even fooling, other than themselves?
|
# ? Apr 28, 2014 05:18 |
|
I thought the "AI minimises suffering in such a way that it could, theoretically, justify torture, if the alternative was a really big number of other people suffering in a minor way" was meant to be saying "this is what a Yudkowsky-Bayes AI which considers torture and dust specks being on the same spectrum of "suffering" could do". But that can't be the case because the reaction to this by every rational person would be never loving create/allow to come into existence an AI that uses Yudkowsky-Bayes logic.
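The torture-and-dust-specks arithmetic being alluded to is trivially easy to write down, which is part of why it reads as absurd. A sketch with entirely invented disutility numbers (the actual thought experiment uses 3^^^3 people, a number no machine can represent, so 10**21 stands in for "astronomically many"):

```python
# Naive linear aggregation of "suffering", as in the torture-vs-dust-specks
# argument. Every number here is made up purely for illustration.

DUST_SPECK = 1e-12   # assumed disutility of one dust speck in one eye
TORTURE = 1e7        # assumed disutility of fifty years of torture
PEOPLE = 10**21      # stand-in for "an unimaginably large number"

total_speck_suffering = DUST_SPECK * PEOPLE  # about 1e9

# A strict minimizer of summed suffering "prefers" the single torture:
print(TORTURE < total_speck_suffering)  # True
```

Once torture and specks sit on one additive scale, a big enough multiplier always flips the comparison; rejecting the conclusion means rejecting the single scale, which is exactly the "never create such an AI" reaction described above.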
|
# ? Apr 28, 2014 05:23 |
|
CROWS EVERYWHERE posted:I thought the "AI minimises suffering in such a way that it could, theoretically, justify torture, if the alternative was a really big number of other people suffering in a minor way" was meant to be saying "this is what a Yudkowsky-Bayes AI which considers torture and dust specks being on the same spectrum of "suffering" could do". But that can't be the case because the reaction to this by every rational person would be never loving create/allow to come into existence an AI that uses Yudkowsky-Bayes logic.

Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the
|
# ? Apr 28, 2014 05:30 |
|
ArchangeI posted:Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the

Limit its access to people who think like Yudkowsky, given that they're both going to be the ones most capable of giving it power and most capable of being its victims.
|
# ? Apr 28, 2014 05:44 |
|
ArchangeI posted:Or at the very least, severely limit its access to actual real power. Yudkowsky seems to assume that the moment we create a really, really smart AI, everyone in the world just turns over everything to it. Which makes sense, because it is the

Yudkowsky seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software".
|
# ? Apr 28, 2014 05:53 |
|
|
AATREK CURES KIDS posted:Yudkowsky seems to think that a smart enough AI would know the exact string of text that would hijack a human mind. Something about crashing the brain's "software".

So what you are saying is that a sufficiently advanced AI can use a string of words - a spell, if you will - to control a human being?
|
# ? Apr 28, 2014 06:01 |