|
During school breaks I had a 26-hour sleep schedule, because my body hates me. What eventually happens is that you stall out at around 5am and crash, or you force yourself through and fall asleep in the middle of the day to reset back to a sort-of-normal schedule. Basically, you're really tired a lot. Maybe the singularity fixes this clear flaw of human behavior.
|
# ? Apr 24, 2014 03:26 |
|
|
I knew he was poly, which, whatever, different strokes. But "more advanced culture" and having a "slave" are just so cringeworthy.
|
# ? Apr 24, 2014 04:07 |
|
Squashy posted:Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts. Comparing Ulillillia to Yudkowsky is like comparing Ulillillia to Chris-Chan. Ulillillia seems to have shown at least a basic grasp of computer science in his programming attempts and analysis of video game bugs, along with an understanding of the limits of his knowledge. Yudkowsky's theories aren't just incorrect, they're not even consistent with each other. I know Dunning-Kruger gets overplayed on Something Awful, but Yudkowsky is the perfect intersection of Troper Dunning-Kruger and Bay Area startup/venture-capital Dunning-Kruger.
|
# ? Apr 24, 2014 04:36 |
|
AlbieQuirky posted:I knew he was poly, which, whatever, different strokes. But "more advanced culture" and having a "slave" are just so cringeworthy. His mention of getting off on sadism certainly puts the infinite AI torture in a new light, that's for drat sure.
|
# ? Apr 24, 2014 04:52 |
|
Djeser posted:The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture. What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with. Asking a sim to donate doesn't change anything either: the AI already knows the sim's response, so if it goes through with the torture it's just being a dickbag. If I am a sim, and an accurate one, then if I won't donate there was nothing I could have done to the contrary and torture was inevitable. So why go through the formality of asking?
|
# ? Apr 24, 2014 04:58 |
|
Ugly In The Morning posted:His mention of getting off on sadism certainly puts the infinite AI torture in a new light, that's for drat sure. If you don't donate to his foundation, an AI will make a simulation of you and tickle it.
|
# ? Apr 24, 2014 04:58 |
|
It will program a simulation of you that inflates like a balloon, and also will look like a my little pony.
|
# ? Apr 24, 2014 05:39 |
|
I'm pretty sure the optimal situation for his AI torture routine is to not actually torture anyone but have about two-to-three-hundred years of dumbasses waiting in the ranks to donate because Roko's basilisk seemed convincing enough. This is optimal if you're an AI that somehow ends up created through these turns of events or if you're the thumbsitting owner of an AI research program that's promising to develop one sometime soon. It's like the threat of hell without all of the moral implications of actually sending people to hell!
|
# ? Apr 24, 2014 06:07 |
|
Simulations of you can't feel anything and are not possible in the first place. Particularly damning for the Timeless ridiculousness that a "rationalist" should accept: there's no reason to suppose that, given a state of the universe A, there is only one possible sequence of events that brings about state A.
|
# ? Apr 24, 2014 08:04 |
|
Lightanchor posted:Simulations of you can't feel anything and are not possible in the first place. Remember, there's no such thing as a probability of 1 or 0 in Yudkowskyland, so 'possible but exceedingly unlikely' is the most we can say. Which is to say, it's impossible to describe the probability of (a && ~a) or (a given a), but whatever.
|
# ? Apr 24, 2014 08:22 |
|
PsychoInternetHawk posted:What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with. Perhaps the AI could create an AI that would always go through with the torture, even when it's not in the AI's interest, to create a credible threat. For more information on this topic please see Dr. Strangelove.
|
# ? Apr 24, 2014 08:36 |
|
Swan Oat posted:30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? The Harry Potter of MoR has a natural 30-hour wake-sleep cycle and is completely incapable of functioning on a normal 24-hour schedule. As far as I can tell there is no narrative reason for this, other than that it makes him extra-special.
|
# ? Apr 24, 2014 09:33 |
|
Also it makes him into Old Snake. Which I thought was the kind of irrational, impossible thing Yudkowsky hated in the original books, but such is the sacrifice of war.
|
# ? Apr 24, 2014 10:04 |
|
AATREK CURES KIDS posted:Perhaps the AI could create an AI that would always go through with the torture, even when it's not in the AI's interest, to create a credible threat. For more information on this topic please see Dr. Strangelove. Yeah, but if it's a simulation, then whether or not it goes through with it isn't going to change the credibility of the threat. There's no way to verify if a simulation is actually being tortured, or if the AI is just claiming it's torturing a bunch of them. It's quite a bit different from being able to see missile silos and troop movements. The intangibility really fucks with the credibility. Also, this guy was pretty much hosed the day his parents named him "Eliezer". The only career path for a name like that is "insufferable turbonerd".
|
# ? Apr 24, 2014 12:47 |
|
He could be torturing people right now if only his parents had named him Elaizer.
|
# ? Apr 24, 2014 12:52 |
|
PsychoInternetHawk posted:What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with. I bolded that part because Timeless Decision Theory argues that future actions can and do affect the past. Not in the sense of "well a prediction affects what you do" but literally saying "the events of the future have a direct impact on you in the present as long as you can predict them with perfect accuracy". It's nonintuitive and loving dumb, but in the impossible case where you've got a perfectly accurate prediction, it works. Djeser posted:Timeless Decision Theory works on the idea of predictions being able to influence events. For instance, you can predict (or, as Less Wrong prefers, simulate) your best friend's interests, which allows you to buy him a present he really likes. You, in the past, simulated your friend getting the present you picked out, so in a sense, the future event (your friend getting a present he likes) affected the past event (you getting him a present) because you were able to accurately predict his actions. Remember, they also believe that any rational AI will also agree with their theories on Bayesian probability and Timeless Decision Theory.
|
# ? Apr 24, 2014 13:18 |
|
Someone needs to pretend they're a rational AI and tell the Yuddites they're all sims who'll get tortured if they don't stop being smug douchebags, because their smug douchebaggery is jeopardising the creation of a rational AI in some vaguely-defined butterfly effect way.
|
# ? Apr 24, 2014 14:05 |
|
Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down?
|
# ? Apr 24, 2014 14:22 |
|
YggiDee posted:Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down? Don't be ridiculous. The premise where Yudkowsky fails at something he tries is clearly false.
|
# ? Apr 24, 2014 14:26 |
|
Can Super AI create a simulation that even it can't torture? That's the real question.
|
# ? Apr 24, 2014 14:42 |
|
Ugly In The Morning posted:Also, this guy was pretty much hosed the day his parents named him "Eliezer". The only career path for a name like that is "insufferable turbonerd". They were probably hoping for "rabbi"
|
# ? Apr 24, 2014 16:13 |
|
To be fair, the one true prophet of the imminent robogod is kinda that, but for the sense of humor required.
|
# ? Apr 24, 2014 17:21 |
|
You know, sure it seems pretty bad that the super-AI is running gazillions of simulations of me being tortured, but that assumes super-AI is the only game in town. What if super-duper AI comes out later and to make amends for the cruelty of its predecessor runs bajillions of simulations of me engaging in fruitful and wholesome activities? What if in the deep future after all black holes have evaporated there are a few hojillion Boltzmann Brain simulations of me just kinda tooling about and not doing anything really bad or good? Taking all of that together the odds that I'm in a simulation are certainly astronomical, but the odds I'm in a torture simulation are pretty low. Seems to me like these guys aren't getting the big picture.
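That big-picture arithmetic fits in a few lines. A toy sketch in Python, where every count is invented purely for illustration (the post only says "gazillions", "bajillions" and "hojillions"; only the ratios matter for the argument):

```python
# All of these counts are made up for illustration; only the ratios matter.
torture_sims = 10**9       # the basilisk-AI's "gazillions"
wholesome_sims = 10**15    # the kinder successor's "bajillions"
boltzmann_sims = 10**21    # deep-future "hojillions" of idle Boltzmann brains

total_sims = torture_sims + wholesome_sims + boltzmann_sims
p_torture_given_sim = torture_sims / total_sims
print(p_torture_given_sim)  # roughly 1e-12: almost certainly not a torture sim
```

As long as the later, nicer simulators outnumber the basilisk by orders of magnitude, "I'm probably a sim" and "I'm probably a torture sim" come apart completely, which is the post's point.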
|
# ? Apr 24, 2014 18:18 |
|
YggiDee posted:Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down? No, no, his charity isn't going to build the AI. It exists to tell the people building AIs to make friendly ones, and stop people creating unfriendly ones by mistake in some unspecified way.
|
# ? Apr 24, 2014 18:29 |
|
This guy has a slave and is probably INTJ. Seems pretty cool to me.
|
# ? Apr 24, 2014 19:15 |
|
Apparently Eliezer Yudkowsky contributed a couple of chapters to a volume titled 'Global Catastrophic Risks'. Turns out there's also a chapter in it with sections about 'the singularity and techno-millennialism', 'techno-apocalypticism' and 'symptoms of dysfunctional millennialism'
|
# ? Apr 24, 2014 19:53 |
|
Somebody tell me about "the sequences". I don't know what they are and gently caress going to the wiki.
|
# ? Apr 24, 2014 19:59 |
|
LordSaturn posted:Somebody tell me about "the sequences". I don't know what they are and gently caress going to the wiki. Yudkowsky's blog posts on Less Wrong. They are called sequences because many of them are grouped up by topic.
|
# ? Apr 24, 2014 20:03 |
|
The Vosgian Beast posted:Yudkowsky's blog posts on Less Wrong. They are called sequences because many of them are grouped up by topic. There's a long sequence on quantum mechanics that I read a few years back, before I knew who Yudowski was. I might go back through them when I get the time - I'll report to the thread if I do. The one thing I specifically remember is that he has a pet solution to the wave-function collapse problem (basically a variant on many-worlds) that he presents as settled scientific fact, to the point that he specifically mocks the Copenhagen interpretation as nonsense. I remember that because it was the point where the Bells of Cognitive Dissonance began ringing in my head: though by no means a physicist myself, I was pretty sure I would have heard if a definitive explanation had been reached for wave-function collapse. Up until that point I'd been assuming the author of the sequence was an expert of some kind. (I'm... a little gullible sometimes.) Alien Arcana fucked around with this message at 21:15 on Apr 24, 2014 |
# ? Apr 24, 2014 21:11 |
|
The singularity assumes you can make an AI smarter than you. Then you can run multiple copies in parallel in a data center, and Moore's law applies, so they can improve themselves very quickly. So now you have an all-knowing oracle in a box. You say, "what should we do?". And it gives you some answers. And then there is literally no way you can possibly know if those answers are right. You can't debug something that is supposed to completely transcend you. So you can only make use of an infinitely intelligent AI if, given the correct answer to any question, you could be convinced it was true through a reasonably short rational argument.
|
# ? Apr 24, 2014 22:21 |
|
Wait, so it's accepted as truth in LessWrong that 0 and 1 are not valid probabilities, and that it is possible to simulate a future event with perfect accuracy? How the gently caress does he square that circle? If I run a perfect simulation of a future event, the probability of that event happening as I predicted it is _______. Does he ever address this?
|
# ? Apr 24, 2014 23:15 |
|
As far as I can tell, no, but that's because he considers Timeless Decision Theory to be applicable to all cases of predictions, including as low as a 65% chance of accurately predicting a yes or no style question. The problem with this is, as other people have pointed out, without perfect accuracy, it's not weird timetraveling predictive decision bullshit. It's just predictions, which are covered under normal theories of decision-making.
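For what it's worth, the "it's just predictions" point is easy to see with ordinary expected value. A sketch using the standard Newcomb's-problem payoffs (the usual textbook numbers, nothing from this thread), against a predictor that guesses your choice correctly with probability p:

```python
def expected_values(p):
    """Expected payoffs against a predictor that is right with
    probability p (standard Newcomb payoffs assumed for illustration)."""
    one_box = p * 1_000_000                          # predictor right -> big box full
    two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # predictor wrong -> both boxes pay
    return one_box, two_box

one, two = expected_values(0.65)  # the 65%-accurate predictor mentioned above
print(one > two)  # True: plain expectation already favors one-boxing
```

No timeless machinery required. As accuracy drops toward a coin flip (try `expected_values(0.5)`), the same ordinary calculation flips back to two-boxing, which is exactly what "it's just predictions, covered by normal decision theory" means.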
|
# ? Apr 24, 2014 23:58 |
|
Alien Arcana posted:There's a long sequence on quantum mechanics that I read a few years back, before I knew who Yudowski was. I might go back through them when I get the time - I'll report to the thread if I do. Really, it's a crying shame Yudkowsky has never been up for a Nobel prize in physics.
|
# ? Apr 25, 2014 00:58 |
|
Djeser posted:As far as I can tell, no, but that's because he considers Timeless Decision Theory to be applicable to all cases of predictions, including as low as a 65% chance of accurately predicting a yes or no style question. The problem with this is, as other people have pointed out, without perfect accuracy, it's not weird timetraveling predictive decision bullshit. It's just predictions, which are covered under normal theories of decision-making. But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't?
|
# ? Apr 25, 2014 13:33 |
|
Mr. Sunshine posted:But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't? Nothing. Other than getting a weirdo neckbeard Internet-Famous
|
# ? Apr 25, 2014 14:04 |
|
Hey guys, I've got this drawing of Yudkowsky here, and I'm gonna draw an X on it unless he pays me money. This is exactly the same as me torturing the drawing, because I say that the drawing feels pain and it thinks it is Yudkowsky (see the thought bubble in the upper left corner). And as far as drawings go this is an exact replica of Yudkowsky to the point where you can't really be sure he's not just a drawing and that I'm not gonna draw the X on HIM. So he has to give me money otherwise he might have an X drawn on him. See attached on the back of the drawing my long list of rules (in easily visible crayon) for why drawings feel 3^^^^3 torturebux worth of pain when an x is drawn on them*. Also I can make photocopies if I need to. * this is the truth.
|
# ? Apr 25, 2014 14:04 |
|
The Vosgian Beast posted:Really, it's a crying shame Yudkowsky has never been up for a Nobel prize in physics. Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race! quote:And now even within Ravenclaw, his only remaining competitors were Padma Patil (whose parents came from a non-English-speaking culture and thus had raised her with an actual work ethic), Anthony Goldstein (out of a certain tiny ethnic group that won 25% of the Nobel Prizes),[...] What a shock he thinks his own ethnic group is inherently more intelligent
|
# ? Apr 25, 2014 14:13 |
Some of the discussion comments on those pages are amazing.
|
|
# ? Apr 25, 2014 14:16 |
|
ol qwerty bastard posted:Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race! Not race, dude. He's talking about 'culture' and 'ethnicity.' It's different, because if he was talking about race he'd be racist and words hurt his delicate lazy feelings.
|
# ? Apr 25, 2014 14:23 |
|
|
Mr. Sunshine posted:But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't? It does terrible things with causality and human psychology. Thinking of the real world for a moment, consider the possibility that you live in a fully deterministic universe. Every decision you make is predetermined, and a sufficiently cunning person (or AI) with access to the right data could flawlessly model your entire life. None of this helps answer the question "What would you like for breakfast?" This is because we have limited knowledge of the future, so even if the universe is fully determined we have to live on the assumption that it isn't. Now, TDT takes that away. You now have 100% complete knowledge of the future. What do you have for breakfast? The answer is easy: you look ahead to your next breakfast and see the confluence of events which led to you choosing toast. So you choose to eat toast. Now, as your prediction of the future is perfect, you must eat toast. You're going to eat toast for breakfast for no other reason than because you chose to eat toast, and you chose to eat toast because you're going to eat toast, and so on ad infinitum. Every decision becomes a stable time loop, the outcome only the outcome because you know what the outcome is. God knows what happens to a human consciousness exposed to this sort of thinking, but I expect it would implode in short order. But for more fun, imagine what happens when two of these future-predicting TDT minds meet. Essentially, any interaction between them becomes an infinite recursion of '...if you do that then I'd do this...' on both sides. In some (most?) situations they might arrive at a stable equilibrium (let's split the cake 50/50) and they'd do it instantly, but it's fairly trivial to construct a game-theoretical position with no stable pure-strategy equilibrium. In fact, I'll do it now. quote:AI Alice has two buttons marked X and Y. She wins if she and Bob press different buttons. AI Bob also has two buttons marked X and Y. He wins if they press the same buttons. The two AIs know about each other but cannot communicate. Two TDT minds cannot solve that problem without establishing a wider context, i.e. one of them being okay with losing, or sabotaging the other's buttons or something. One TDT mind can, because its opponent will not have perfect knowledge of the future, but two are basically hosed. Of course, you could argue that we have free will - in which case perfect future predictions become impossible and so does TDT.
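That Alice/Bob game can be checked mechanically. A quick Python sketch (function and variable names are mine, not anything from the post) that enumerates all four pure outcomes and confirms that in every one of them, some player would rather switch buttons:

```python
MOVES = ("X", "Y")

def payoffs(a, b):
    # Alice wins iff the buttons differ; Bob wins iff they match.
    alice = 1 if a != b else 0
    return alice, 1 - alice

def is_pure_equilibrium(a, b):
    pa, pb = payoffs(a, b)
    if any(payoffs(a2, b)[0] > pa for a2 in MOVES):
        return False  # Alice would profitably switch
    if any(payoffs(a, b2)[1] > pb for b2 in MOVES):
        return False  # Bob would profitably switch
    return True

equilibria = [(a, b) for a in MOVES for b in MOVES if is_pure_equilibrium(a, b)]
print(equilibria)  # prints []: no pure-strategy equilibrium exists
```

The only equilibrium is the mixed one, each side pressing at random, which is precisely what two deterministic perfect predictors locked in "if you do that, I'd do this" can't settle into.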
|
# ? Apr 25, 2014 14:37 |