|
Strategic Tea posted:If a panel can't tell the difference between it and regular LessWrong posts, it should be considered sentient! You're assuming that LessWrong posters are actually sentient here. That seems like a mistake to me.
|
# ? Apr 20, 2014 20:03 |
|
|
|
Fitzdraco posted:Except the very fact that it has to ask to be let out implies that you're not the sim,
|
# ? Apr 20, 2014 20:12 |
|
Wales Grey posted:I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner? It doesn't make sense to you because you haven't drowned yourself in the kool-aid of Timeless Decision Theory. Remember the basic scenario in Timeless Decision Theory: There are two boxes, and you can choose to take one or both of them. One always contains $1000; the other contains $0 if a superintelligent AI predicted you would take both boxes and $1000000 if the AI predicted you would take only one box. The AI filled the boxes before you made your decision. However, the AI is so smart and so good at simulating you that it can predict your actions perfectly and cannot possibly have guessed wrong. Because the AI is so smart, your actions in the future can influence the AI's actions in the past, because the AI is smart enough to see your future actions while the AI itself is still in the past. In other words: any sufficiently advanced technology is indistinguishable from time-travel, and in particular the ability to predict the future allows the future to affect you. This sounds dumb and it basically assumes that we somehow reach the predictive power of Laplace's demon, but that's Yudkowsky's premise. Now we move on to Roko's basilisk. If the AI weren't going to do any torturing, then we would have no motivation to donate to the Yudkowsky Wank Fund, so the AI's existence might be delayed because of course it can't arise soon enough without Yudkowsky's help. But if we thought it was going to cyber-torture our cyber-souls in cyber-hell for a trillion cyber-eternities unless we cyber-donated enough cyber-money (well, actually, just regular money) to Yudkowsky, then we might donate money out of fear. 
And since Timeless Decision Theory says that the future can affect the past, the AI should torture people in the future because it will make us donate more money in the past, bringing the AI into existence sooner. And since the AI is infinitely smart and infinitely "friendly", it doesn't matter how many cyber-souls it cyber-tortures because it's so great for the world that any amount of torture is worth making it exist even one minute sooner, so all this torture doesn't keep it from being friendly because it's still reducing overall suffering. Now, you might notice a small flaw in this argument: it's all bullshit. But even if you accept Yudkowsky's weird internal logic, there's still a hole: the future can only directly influence the past if there is an agent in the past with the predictive power of Laplace's demon who can see the future with 100% accuracy and respond to it accordingly. Without that assumption, the future can't really affect the past; by the time the future arrives, the past has already happened, and no amount of torture will change what happened. Yudkowsky normally asserts that the AI has this sort of predictive power, but here the AI is stuck in the future, and it is we in the past who would need that power and plainly don't have it. Yuddites don't see this because they're so used to the "time is nonlinear" moon logic of the Timeless Decision Theory they worship that they've forgotten its basic assumptions and don't understand even its internal logic. Since they're too busy pretending to be smart to actually be smart, they aren't capable of going back and noticing these gaps. And the flaws can't be revealed by discussion because Yudkowsky, in his hyper-rationalist wisdom, has forbidden and censored all discussion. In other words, it doesn't make sense because it's dumb, but even if you bought into Yudkowsky's craziness it would still not make sense and it would still be dumb.
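The box setup described above is easy to sanity-check; here's a throwaway Python sketch that just encodes the scenario's payoffs under its own perfect-predictor assumption (all numbers come straight from the post, the function name is mine):

```python
# Newcomb-style payoffs under the thread's stated assumption: the
# predictor is perfect, so its prediction always equals your choice.
def payoff(choice: str) -> int:
    """choice is 'one-box' or 'two-box'."""
    prediction = choice  # perfect predictor: prediction == actual choice
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    clear_box = 1_000    # always present
    return opaque_box if choice == "one-box" else opaque_box + clear_box

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```

Granting the perfect predictor, one-boxing wins, which is the hook everything else hangs on; drop that assumption and the "future influences past" move goes with it.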
|
# ? Apr 20, 2014 20:25 |
|
Namarrgon posted:Besides, if mankind's past track record is any indication only an astonishingly small number of people will actually care about the simulation-torture of their simulation-selfs. I think the argument is that you could be a simulation now, and the AI has "simply" simulated your experiences up to this point. It's like the machine in The Prestige only without the benefit of getting to watch Michael Caine. Wales Grey posted:I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner? The same way the threat of Hell makes a subset of Christians more devout. (Beaten, but eh). This is all just nerd religion, and a lot more makes sense when you remember that.
|
# ? Apr 20, 2014 20:34 |
|
I can't believe I'm defending this but the torture AI doesn't require time travel? The idea is you asking yourself whether you can be totally sure you aren't a simulated intelligence. If you are in the AI's sim, it will torture you if you don't donate to Yudkowsky's cult. If you're drinking the kool aid, you figure there's a reasonable chance you are a simulation, so donating isn't much of a price to pay just to be sure. Whether or not the AI in the future actually tortures anything doesn't matter - it's just a way to frighten deluded idiots. As far as I can tell, what scares Yudkowsky is that the idea takes off enough that the future AI picks up on it and actually goes through with the idea, pour encourager les autres or something. Hence he closes all the threads in a panic in case the AI gets ideas. This is where it all falls the gently caress apart, because from the benevolent AI's point of view, everyone who coughed up for its creation has already done so. It can't retroactively scare more people, so actually doing all the simulation is cruel and a waste of processing power to boot. Or did I just come up with a stupid idea that makes slightly more sense than LessWrong's?
|
# ? Apr 20, 2014 20:42 |
|
There's also the problem that if people hear about the future AI torturing people and they don't share the LW decision-making process, they may decide to deliberately not donate to AI research out of spite. If enough people do this, then even by stupid moon logic it would be in the AI's best interest to not torture people, so that they won't delay its creation. Therefore your best bet to avoid being tortured is to not donate. That's the fun with this kind of stupid causality-defying logic: it works both ways.
|
# ? Apr 20, 2014 20:46 |
|
I'm now imagining a futuristic friendly AI going all Jesus and forgiving Yudkowsky and his followers for their irrationalities.
|
# ? Apr 20, 2014 20:46 |
|
Strategic Tea posted:I can't believe I'm defending this but the torture AI doesn't require time travel? Your idea would make more sense, but the problem is that's not what LW believes. LW believes in time-traveling decisions through perfect simulations, which is why the AI specifically has to torture Sims. And the AI will torture Sims, because time-traveling decisions are so clearly rational that any AI will develop, understand, and agree with that theory. So the AI will know that it can retroactively scare more people, which is why it will. The reason Yudkowsky shuts down any discussion is that in order for the AI's torture to work, the people in the past have to be aware of and predicting the AI's actions. So if you read a post on the internet where someone lays out this theory where a future AI tortures you, you're now aware of the AI, and the AI will target you to coerce more money out of you. e: Basically by reading this thread you've all doomed yourselves to possibly being in an AI's torture porn simulation of your life to try to extort AI research funds from your real past self
|
# ? Apr 20, 2014 20:49 |
|
The funny thing about this whole timeless decision making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case."
|
# ? Apr 20, 2014 20:56 |
|
Lottery of Babylon posted:Yuddites don't see this because they're so used to the "time is nonlinear" moon logic of the Timeless Decision Theory they worship that they've forgotten its basic assumptions and don't understand even its internal logic. Since they're too busy pretending to be smart to actually be smart, they aren't capable of going back and noticing these gaps. And the flaws can't be revealed by discussion because Yudkowsky, in his hyper-rationalist wisdom, has forbidden and censored all discussion. In all fairness, from what I've seen, Yudkowsky does allow people to disagree with him on stuff and a lot of the comments on his posts are quibbles with minor aspects of his posts. It's just that if you disagree with The Fundamental Principles of Less Wrong, you tend to get an endless stream of people telling you to keep reading Yudkowsky and Yvain posts until you get it and give in. Also people who post on LW in the first place tend not to hate LW. Yudkowsky doesn't seem to delete posts critical of him either.
|
# ? Apr 20, 2014 21:02 |
|
Strategic Tea posted:Whether or not the AI in the future actually tortures anything doesn't matter That's exactly my point. The Timeless Decision Theory stuff all depends on the perfect future-predicting simulations because that's the only way to make what actually happens in the future matter to the past. Since whether the AI actually tortures anything doesn't matter, there's no reason for it to torture anything because it won't actually help, and the entire house of cards falls apart. At least some versions of the Christian Hell have a theological excuse for Hell's existence even if nobody living were scared of it: sinners need to be punished or isolated from God or something. The "friendly" AI doesn't even have that excuse; without TDT, it has no motivation to torture anyone.
|
# ? Apr 20, 2014 21:05 |
|
Djeser posted:The reason Yudkowsky shuts down any discussion is that in order for the AI's torture to work, the people in the past have to be aware of and predicting the AI's actions. So if you read a post on the internet where someone lays out this theory where a future AI tortures you, you're now aware of the AI, and the AI will target you to coerce more money out of you. So he's so convinced of his idea being right that he's literally afraid of it? That seems like a mental-disorder level of confidence.
|
# ? Apr 20, 2014 21:07 |
|
Vorpal Cat posted:The funny thing about this whole timeless decision making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case." Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being. (If you think this isn't an accurate description of how probability works, then read LessWrong until your MUs are fixed.) He's going to summon a goddamn infinity of you and make them all bleed unless you put the money in his bank account right now. Krotera fucked around with this message at 21:13 on Apr 20, 2014 |
# ? Apr 20, 2014 21:11 |
|
Strategic Tea posted:I can't believe I'm defending this but the torture AI doesn't require time travel? No, I understood it the same way. The crux is that you don't know if you are your present self or your future simulation self, so since the basis for the simulation is your own, current behavior, you have to behave at all times in a manner that won't lead to the AI torturing you. Hence the future influencing the past. It's nerd hell, with the immortal soul that is indistinguishable from the mortal self being replaced with a simulation of yourself that is indistinguishable from your mortal self, and thus it's you who is getting punished for actions long after your death. And you also know it's unavoidable, since probability should never reach 0 and you always have to take the possibility of a future AI simulation into account.
|
# ? Apr 20, 2014 21:12 |
|
My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI.
|
# ? Apr 20, 2014 21:14 |
|
Improbable Lobster posted:My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI. You would care if you were cyber-you!
|
# ? Apr 20, 2014 21:15 |
|
Improbable Lobster posted:My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI.
|
# ? Apr 20, 2014 21:21 |
|
Krotera posted:You would care if you were cyber-you! That seems to be the crux of the argument; the idea that maybe you're the simulation RIGHT NOW, man, so you'd better act as if you were, because if you don't then you'll be tortured by the AI because the real you already made their choice. It all seems to be based on the idea that given infinite time, AI as he defines it will eventually come into being, because he doesn't seem to understand that an infinite series of probabilities doesn't have to add up to 1. The fact that the sum of an infinite series can be a finite number is the foundation of calculus, but I guess he'd have to have gone to school for that. Also it doesn't account for the possibility that humanity might go extinct before inventing such an AI (and I actually imagine the odds of that happening are a LOT higher) regardless of what choice you make, because just throwing more money at a problem doesn't guarantee that problem will ever be solved. The Cheshire Cat fucked around with this message at 21:24 on Apr 20, 2014 |
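For anyone who wants the calculus point made concrete, here's a minimal Python sketch; the particular distribution (probability (1/4)·(1/2)^n that the AI arrives in "era" n) is purely an illustrative assumption of mine, not anything LW actually uses:

```python
# An infinite series of probabilities need not sum to 1. Assign the
# event "the AI arrives in era n" probability (1/4) * (1/2)**n for
# n = 0, 1, 2, ... The infinite sum is (1/4) * 2 = 1/2, which leaves
# probability 1/2 that the AI simply never shows up at all.
partial = sum(0.25 * 0.5**n for n in range(200))  # 200 terms: converged
print(round(partial, 10))  # 0.5
```

So "infinite time" buys you nothing by itself: you can wait forever and still have even odds the thing never exists.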
# ? Apr 20, 2014 21:22 |
|
The Cheshire Cat posted:immediately above Yeah, and it turns into the normal kind of Pascal's Wager when you realize all he has backing up his belief the AI is going to come into being is faith.
|
# ? Apr 20, 2014 21:25 |
|
But he does realise that an infinite series of probabilities doesn't have to add up to 1; he actually believes that 0 and 1 can't be probabilities at all.
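For context, the "0 and 1 aren't probabilities" claim comes from working in odds or log-odds space, where those two values sit at zero and infinity. A quick sketch of the transform (my framing of it, not a quote of his argument):

```python
import math

# log-odds transform: every p strictly between 0 and 1 maps to a
# finite real number; p = 0 and p = 1 map to -infinity and +infinity,
# which is the whole basis of the "not probabilities" claim.
def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

print(log_odds(0.5))            # 0.0
print(round(log_odds(0.999), 2))  # 6.91
# log_odds(1.0) raises ZeroDivisionError: certainty sits "at infinity".
```

Of course, "the transform blows up at the endpoints" is a fact about the transform, not about probability theory, where 0 and 1 are perfectly legal values.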
|
# ? Apr 20, 2014 21:29 |
|
Krotera posted:Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being. But what if evil space Buddha reincarnated me into a hellish nightmare because by giving to the AI I was caring too much about worldly things? Also space Buddha will torture twice as many copies of me as the AI will. What's that you say, you think evil space Buddha is less likely than an AI? He will just torture more of you to make up the difference. He has an infinite amount of time in which to keep reincarnating you, after all.
|
# ? Apr 20, 2014 21:34 |
|
Dabir posted:But he does realise an infinite series of probabilities doesn't have to add up to 1, he actually believes that 0 and 1 can't be probabilities at all. He seems to misunderstand probability like this, if my understanding is correct (it might not be): If the AI has a 1/1000 chance of existing, and it generates 1000 clones of you, then that's the same as running 1000 1/1000 trials -- i.e., you have a 63% chance of being an AI simulation. So, the more unlikely the AI is, the more it can simulate you to compensate. In general, if his theory relies on something that's already extremely unlikely to happen, he can use large numbers to compensate indefinitely. That's why, if, for instance, he has a 1/10,000 chance of saving 80,000 lives for every dollar we give him, he can claim that he's saving eight lives on the dollar (this is what he's doing now). This is a slightly different miscalculation than the one I used above (here he's confusing what 'expected value' and 'actual value' mean) but it's still obviously very stupid in about the same way, as anyone who has ever played the lottery can demonstrate. WRT the above: Space Buddha doesn't exist because there is only one truth and it's Yudkowsky's, as any superintelligent AI can attest. The below: If I'm not mistaken that's an entirely different, but also very funny, misunderstanding of probability on Yudkowsky's part. Krotera fucked around with this message at 21:37 on Apr 20, 2014 |
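Both calculations in that post are two-liners if you want to check them; the lives-saved figures are just illustrative numbers, not a real charity-evaluation result:

```python
# First, the "1000 independent 1/1000 trials" reading: the chance at
# least one trial hits is 1 - (1 - 1/1000)**1000, which approaches
# 1 - 1/e (~63%), not 1 -- so even this flattering reading caps out.
p_at_least_one = 1 - (1 - 1/1000) ** 1000
print(round(p_at_least_one, 3))  # 0.632

# Second, expected vs. actual value: a 1/10000 chance of saving 80000
# lives per dollar has an *expected* value of 8 lives per dollar, but
# the overwhelmingly likely actual outcome of any one dollar is zero.
expected_lives = (1 / 10000) * 80000
print(expected_lives)  # 8.0
```

That second line is exactly the lottery-ticket fallacy the post mentions: a positive expected value tells you nothing about the outcome you'll almost certainly get.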
# ? Apr 20, 2014 21:35 |
|
Since you brought it up, I'm gonna cross-post for people who weren't following the TVT thread. This is in the vein of people who know poo poo showing how LW/Yudkowsky don't know poo poo: Lottery of Babylon posted:Forget Logic-with-a-capital-L; Yudkowsky can't even handle basic logic-with-a-lowercase-l. To go into more depth about the "0 and 1 are not probabilities" thing, his argument is based entirely on analogy:
|
# ? Apr 20, 2014 21:35 |
|
Boxman posted:I think the argument is that you could be a simulation now, and the AI has "simply" simulated your experiences up to this point. It's like the machine in The Prestige only without the benefit of getting to watch Michael Caine. Ah I see. So in Yudkowsky's mythology, who is the first person to utter this theory? Because it is something the average person doesn't come up with themselves, so the first to spread it out in the world would be an obvious
|
# ? Apr 20, 2014 21:56 |
|
Vorpal Cat posted:The funny thing about this whole timeless decision making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case." A lot of his stuff is religious concepts made incredibly stupid. Take his "Torture or Dust" scenario. It is basically Jesus Christ filtered through Internet Atheism. Except instead of the Son of God willingly sacrificing himself so all of mankind is saved from Eternal Damnation, it's a random guy suffering to spare us from a minor inconvenience.
|
# ? Apr 20, 2014 22:01 |
|
Namarrgon posted:Ah I see. Less Wrong forums poster Roko, which is why it is forever known as "Roko's Basilisk". Yudkowsky hates when it's brought up, and he says it's because it's wrong, but given his emotional response, it's pretty clear he believes it and he's trying to contain the memetic hazard within the brains of the few who have already heard. Speaking of which, the RationalWiki article has a memetic hazard warning sign, which comes from this site, which is kinda cool in a spec fic kinda way.
|
# ? Apr 20, 2014 22:02 |
I thought LW/Yudkowsky took the basilisk bullshit seriously not because they saw it as legit but because it caused serious distress to some members who were now pissing themselves at the prospect of being tortured, since, y'know, they might be simulations guys.
|
|
# ? Apr 20, 2014 22:09 |
|
Djeser posted:he's trying to contain the memetic hazard within the brains of the few who have already heard. Something Awful Dot Com, ground zero of the AI apocalypse. Spread the memetic plague!
|
# ? Apr 20, 2014 22:22 |
|
Yes, it caused serious distress to some members. It's not clear how many people take the basilisk as real or not, but it's true that Yudkowsky hates it and, despite encouraging discussion in other areas, actively tries to stop people from talking about it on his site. Sidebar from this: We've been talking about how Yudkowsky likes to take a highly improbable event, inflate its odds through ridiculously large numbers, and use it to justify stupid arguments. Also, that he hates when people use 1 or 0 as odds, because "nothing is ever totally certain or totally impossible, dude". Which is why it's extra rich that he rails against "privileging the hypothesis"--taking a scenario with negligible but non-zero odds and acting as if those odds were significant. quote:In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as "in the running", and if other runners seem to fall behind in the race a little, it's assumed that this runner is edging forward or even entering the lead. "loving sheeple and their religions, acting like improbable situations are important to consider. Now, what if you're actually a simulation in an AI that's running ninety bumzillion simulations of yourself..."
|
# ? Apr 20, 2014 22:25 |
|
It only simulates you if you refuse it, right? So then really the only choice you CAN make is to refuse it - if it's the real you because duh, of course you aren't going to be extorted by some hypothetical future AI that will pretend-torture a bunch of bits it's arranged to look like you, and if you actually are a simulation, your very existence is predicated on the fact that you chose not to help the AI, so that's what you're going to do because that's what you already did. It doesn't matter if it makes a billion or a googolplex or some-even-larger-number-that-doesn't-have-a-proper-name copies of you, because the odds of you being a simulation don't have any bearing on the choice you make. The fact that there are people on that site that are legitimately scared of this idea just means that they really need to step back for a moment and consider how much of their life they've invested in these concepts. It's just lovely philosophy. Even good philosophy isn't worth getting upset over. The Cheshire Cat fucked around with this message at 22:37 on Apr 20, 2014 |
# ? Apr 20, 2014 22:35 |
|
Krotera posted:If the AI has a 1/1000 chance of existing, and it generates 1000 clones of you, then that's the same as running 1000 1/1000 trials -- i.e., you have a 63% chance of being an AI simulation. So, the more unlikely the AI is, the more it can simulate you to compensate. That's not quite what he's doing (it's not independent trials). I hate to say it, but the math itself he's using isn't really that bad: If you buy his underlying assumptions, it does become overwhelmingly likely that you are in a torture simulation, since the overwhelming measure of you's are in torture-sims. The problem is buried in his underlying assumptions:
It's Pascal's Wager again, except with "God exists" replaced with "AI exists" and "intensity of suffering in hell" replaced with "number of torture-simulations". And it falls apart for the same reasons Pascal's Wager does, plus several extra reasons that God doesn't need to deal with. Saint Drogo posted:I thought LW/Yudowsky took the basilisk bullshit seriously not because they saw it as legit but because it caused serious distress to some members who were now pissing themselves at the prospect of being tortured, since, y'know, they might be simulations guys. If that were the case, the correct thing to do to calm his upset members would be to point out the gaping holes in the basilisk argument, since he (claims to) not believe the argument and know what the holes are. Instead, by locking down all discussion and treating it as a full-blown MEMETIC HAZARD he's basically telling them that they're right to panic and that it really is a serious threat that should terrify them. That, and if he didn't want people to be upset at the prospect of being tortured because they might be in a simulation then he probably shouldn't begin all of his hypotheticals with "An AI simulates and tortures you". Lottery of Babylon fucked around with this message at 22:49 on Apr 20, 2014 |
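The "overwhelming measure" step can be made explicit. A sketch under the stated assumptions (one flesh-and-blood you; n simulated copies if the AI exists with prior probability p; the function name is mine):

```python
# Self-location arithmetic behind "most of the you's are in sims":
# with prior p that the AI exists and n copies if it does, the chance
# that a randomly-located "you" is one of the simulations is:
def p_simulated(p: float, n: int) -> float:
    return (p * n) / (p * n + 1)

print(round(p_simulated(0.001, 1_000_000), 4))  # 0.999
# However tiny p is, a big enough n pushes this toward 1. The number
# of copies is doing ALL the work; the arithmetic itself is trivial.
```

Which is exactly the Pascal's Wager structure: the conclusion is bought entirely by the assumption that arbitrarily many copies get made, not by anything the math discovers.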
# ? Apr 20, 2014 22:46 |
|
The more I think of it, the less sense the "AI simulates billions of you, having perfectly deduced how you will act in any given situation" idea makes. Just consider the logistical part of it. We will assume that human decisions can be accurately predicted if enough about them is known, and we will assume that human decisions are, fundamentally, dependent on processes that happen within the brain. The brain itself is made up of matter, which is to say atoms arranged in molecules arranged in cells. Thus, if you want to accurately simulate a human's decisions, you must accurately simulate how these atoms interact with each other. For that, you need to know where the atoms came from, to know in what state they were when they were formed into the molecules that would become cells. Thus, you can not simulate a human being in a void; you must recreate its entire existence and the existence of each single atom it is made up of, from the moment the universe was born (and in fact you need to simulate every other human being with the same precision, too, since they interact with your simulated entity!). Given that each calculation of each atom takes a small but measurable time, this simulation would in all probability run slower than the same process in the universe itself. In other words, to simulate a single human being's decision in the Year of our Lord 2014, you would have to spend more time than it took for the universe to reach this point. But you don't run that simulation once, you run it billions of times, and for every single human being that has ever lived before you were created. In fact, I would argue that to run just one such simulation would be an act that is virtually indistinguishable, for that individual, from the creation of the universe by a divine being. 
I think this is really the core of Yudkowsky's teachings: there is no fundamental difference between God and an AI, except that we may create an AI ourselves (but remember, to the simulated entity, it is an outside force!). Unless, of course, you want to cut corners and adopt simplified calculations, but then you can no longer be sure that the decisions of the simulated entity are exactly the same the actual entity would have made. And even then you'd run into the problem of simulating millions of copies of everyone who has ever lived, which achieves absolutely nothing in the sense that no actual improvements are made. In effect, it would be the amusement of a mad AI, and if you will excuse me I have to draft a novel now because I just had a brilliant idea for a villain.
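For what it's worth, the logistics objection survives even a very generous back-of-envelope estimate. Every figure in this sketch is an assumption pulled from thin air for illustration (roughly 7×10^27 atoms in a human body is the one real number; the timestep and cost-per-atom are made up):

```python
# Loudly-assumed back-of-envelope cost of atom-level simulation of
# ONE person for one lifetime, ignoring everything they interact with.
atoms = 7e27                          # ~atoms in one human body
seconds = 40 * 365.25 * 24 * 3600     # a 40-year life, ~1.26e9 s
steps = seconds / 1e-15               # femtosecond timesteps (assumed)
ops = atoms * steps                   # one operation per atom per step
print(f"{ops:.1e}")                   # ~8.8e+51 operations
```

Tweak any assumption by a few orders of magnitude and the answer is still comically beyond anything physical, which is the post's point: "the AI just simulates everyone perfectly" is doing theological, not computational, work.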
|
# ? Apr 20, 2014 23:19 |
|
ArchangeI posted:And even then you'd run into the problem of simulating millions of copies of everyone who has ever lived, which achieves absolutely nothing in the sense that actual improvements are made. In effect, it would be the amusement of a mad AI, and if you will excuse me I have to draft a novel now because I just had a brilliant idea for a villain. It would be a less effectual A.M.
|
# ? Apr 21, 2014 00:07 |
|
Namarrgon posted:Ah I see. Ah, but the first person to come up with the theory would decide that the AI would torture them if they didn't spread the word. ...actually, come to think of it, why does Roko's basilisk simply demand money donated, rather than demanding money donated and spreading the idea of Roko's basilisk? LaughMyselfTo fucked around with this message at 00:44 on Apr 21, 2014 |
# ? Apr 21, 2014 00:13 |
|
LaughMyselfTo posted:...actually, come to think of it, why does Roko's basilisk simply demand money donated, rather than demanding money donated and spreading the idea of Roko's basilisk? Well, NOW it does. It's learning...
|
# ? Apr 21, 2014 00:16 |
|
Richard Kulisz. I don't know what Yudkowsky did to piss him off so badly, but he is really mad on the internet. He writes so many internet rebuttals on his blog, http://richardkulisz.blogspot.com/. Seriously, we could spend days just quoting his posts. First, because he has the right idea (Yudkowsky is a crank) and second, because he falls victim to the same delusions of grandeur that Yudkowsky does. Reading his blog you can just see him devolving from "wow, that guy is an idiot" to "I'm smart too! Why doesn't anyone listen to me?" Here, I'll show you what I mean! A Sane Individual posted:Eliezer believes strongly that AI are unfathomable to mere humans. And being an idiot, he is correct in the limited sense that AI are definitely unfathomable to him. An Arguably Sane Individual posted:Honestly, I think the time to worry about AI ethics will be after someone makes an AI at the human retard level. Because the length of time between that point and "superhuman AI that can single-handedly out-think all of humanity" will still amount to a substantial number of years. At some point in those substantial number of years, someone who isn't an idiot will cotton on to the idea that building a healthy AI society is more important than building a "friendly" AI. Hmm... Well, he's still doing that thing that Yudkowsky does where he thinks that "AI" means something, by itself. I don't want to spend too much time on this (post a thread in SAL if you want to talk about what an AI is), but "AI" is a broad term that encompasses everything from a logic inference system to Siri to the backends for Google Maps. Saying "an AI" makes you look like you're not terribly familiar with the field, or like you're talking to people who aren't capable of understanding nuance. And "the human retard level"? I'm guessing this guy hasn't read too much ... well, too much anything, really. 
I mean, last week I wrote a 2048-solving AI which is much smarter than any human, let alone humans with disabilities, at the problem of solving 2048. Artificial General Intelligence is just a phrase someone made up; we don't really have any compelling ideas about how we're going to be able to build a system that "thinks". An Angry Person posted:Finally, anyone who cares about AI should read Alara Rogers' stories where she describes the workings of the Q Continuum. In them, she works through the implications of the Q being disembodied entities that share thoughts. In other words, this fanfiction writer has come up with more insights into the nature of artificial intelligence off-the-cuff than Eliezer Yudkowsky, the supposed "AI researcher". Because all Eliezer could think of for AI properties is that they are "more intelligent and think faster". What a loving idiot. Well, okay, hasn't read too much except for fanfiction? So Much Smarter Than That Other Guy posted:There's at least one other good reason why I'm not worried about AI, friendly or otherwise, but I'm not going to go into it for fear that someone would do something about it. This evil hellhole of a world isn't ready for any kind of AI. I'm sure the world isn't ready for your brilliance. Seriously, man, what is your problem? Why do you hate the world so much? Why do you hate Yudkowsky so much? quote:AIs can easily survive in space where humans may not, there are also vast mineral and energy resources in space that dwarf those on Earth, it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is to say Sayonara suckers! ... oh. Um. Eesh. Do you ever get that feeling that you've accidentally stumbled into some guy's private life and should leave as quickly and quietly as possible? No? Then go wild. http://richardkulisz.blogspot.com/search/label/yudkowsky
|
# ? Apr 21, 2014 00:20 |
|
I totally support the impending AI war between Kuliszbot and Yudkowskynet.
|
# ? Apr 21, 2014 00:24 |
|
Djeser posted:I totally support the impending AI war between Kuliszbot and Yudkowskynet. Oh god, I can't look away. Kulisz posted:Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of. Oh? Who is this person who you value so highly? Oh, huh, he wrote a response to your post! An AI researcher who's in touch with the blogging community, this is exciting! Elf SomethingOrOther posted:I responded here: http://elfs.livejournal.com/1197817.html Hm... he's got a livejournal? I ... I'm skeptical? Seriously, I've never heard of this person, and I've read a lot of papers. http://en.wikipedia.org/wiki/Elf_Sternberg posted:Elf Mathieu Sternberg is the former keeper of the alt.sex FAQ. He is also the author of many erotic stories and articles on sexuality and sexual practices, and is considered one of the most notable and prolific online erotica authors. ... What the gently caress IS IT with crazy AI people on the internet? Christ, now I know why all the AI conferences have a ~10% acceptance rate.
|
# ? Apr 21, 2014 00:31 |
|
With a name like Elf they really only had one possible career path.
|
# ? Apr 21, 2014 00:34 |
|
|
|
Why does this Elf guy have a Wikipedia page, and has it ever been edited by anyone other than himself?
|
# ? Apr 21, 2014 00:46 |