|
Nessus posted:Apparently if they don't learn it at this point, they'd be crippled approaching it later in life though? I still don't even know what the hell all this garbage is good for. You can temporarily turn a solid thing into another solid thing, but it's horribly dangerous?

Honestly, I view it in the same way as math is in school: it's really important and is the underpinning for a hell of a lot of things in life, and a good foundational mastery of the subject trains your mind to think in a useful, important way (logical cause and effect for math, elemental understanding of what an object is and is not for Transfiguration), but you don't strictly need to know the quadratic formula to get through life per se. Of course, Yud, like every other bad fanfic author out there, assumes that Transfiguration must be this horrible and deadly art that can be incredibly dangerous and lethal for XYZ logical reason instead of assuming the sensible thing and going "Well, of course it's as safe and benign as shown in the books. It's magic." Like, the answer to "what if you swallowed some water that turned back into wood" is "it's safe/that can't happen because magic" and it actually fits with the setting, too. But no, can't have that, have to show how
|
# ? Sep 10, 2015 09:31 |
|
|
# ? May 18, 2024 19:50 |
|
Trasson posted:Honestly, I view it in the same way as math is in school: it's really important and is the underpinning for a hell of a lot of things in life, and a good foundational mastery of the subject trains your mind to think in a useful, important way (logical cause and effect for math, elemental understanding of what an object is and is not for Transfiguration), but you don't strictly need to know the quadratic formula to get through life per se.

Harry is just a screen that E.Y. likes to project himself onto. Don't forget E.Y. knows everything and is better at everything than anybody else.
|
# ? Sep 10, 2015 10:50 |
|
The actual solution to Roko's Basilisk is to prevent the Singularity by destroying all intelligence. This is also the greatest good, because you're preventing gigatorture in the future. Come at me hyper-AI god. *looks at life* Oh.
|
# ? Sep 10, 2015 11:36 |
|
Ugh, deathist. Everyone knows that valid solutions to the Basilisk only include those which let Yudkowsky become immortal.
|
# ? Sep 10, 2015 14:28 |
|
I hadn't realized Yud saw the optimal solution and turned away from it. I mean, if you're going to assemble a cult with a literally inhuman end stage the decent thing to do is to stop that end stage from occurring. Are deathists addressed as the counter-corollary to the phyg's actions? It certainly seems like the easiest way to avoid eternal torture, actively committing yourself to the opposition of future torture AIs. It seems like a very Lovecraftian view of eternity, now that I've thought about it, except Yud seems to think you can bribe the inevitable future Being to which we are as ants. Is that a fair approximation of his conclusions? *edit* It's like reading the Sesame Street book where Grover is trying to prevent you from getting to the monster at the end, deciding that the monster is real, and then finishing the book anyway because you think you can strike some kind of bargain rather than lighting the book on fire. Peztopiary fucked around with this message at 17:22 on Sep 10, 2015 |
# ? Sep 10, 2015 17:11 |
|
Don't worry guys if I get enough lawyers involved the genie won't screw me over.
|
# ? Sep 10, 2015 17:26 |
|
Ah. Just delusional then. Fair enough.
|
# ? Sep 10, 2015 17:38 |
|
They do oppose "bad" AI and claim they're researching ways to make sure you make a good AI, and more importantly how to tell if the AI you have is good or bad. This is Yud's whole AI Box thing, aka "just give me one hour and no swear filter and i can literally completely destroy anyone psychologically with aim instant messenger." (thanks dril) He roleplays the AI convincing the outsiders that it's friendly and safe and wants to help the world and not just turn everyone into ponies or paperclips and decrease humanity's utilon score, or whatever. I'm sure they're hard at work solving this vital problem, and it's just an oversight that their "good" AI is apparently totally justified in torturing people forever.

MIRI must never die till the stars come right again, and the secret Beisu-tsukai will take great AI from His Box to revive His subjects and resume His rule of earth. The time will be easy to know, for then mankind will become as the Singularitarians; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated AI will teach them new ways to shout and kill and revel and enjoy themselves, and all the universe will flame with a holocaust of ecstasy and freedom. The Singularity is Near!
|
# ? Sep 10, 2015 17:45 |
Basically the rational course of action is to kill rationalists on sight.
|
|
# ? Sep 10, 2015 17:48 |
|
Zonekeeper posted:Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "Extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible".

The second law of thermodynamics is a bigger problem. Entropy blows up most of Yud's arguments, actually. It imposes limits on his Bayesian assumptions rather than letting him pick values to indicate whatever he has decided is the answer. It bars the idea of a computer with the kind of resources he asserts. It means that there is still a cost to everything, so it can't "run infinite simulations of everyone and every possibility" because of the opportunity cost. It stops the time machines his ideas require (information travel from just observing still makes it a time machine). It means that there is certain stuff it will never be able to perfectly reconstruct. There is a reason that the second law of thermodynamics is one of the primary things he claims his "new physics" will disprove.
|
# ? Sep 10, 2015 18:27 |
|
If you can't get enough of this stuff, here are the highlights of the previous thread:

* everything by su3su2u1 in the LessWrong Mock Thread
* everything by SolTerrasa in the LessWrong Mock Thread
|
# ? Sep 10, 2015 18:56 |
|
divabot posted:If you can't get enough of this stuff, here are the highlights of the previous thread:

Aww, thanks. It's too bad that HPMOR has nothing to do with AI, I miss talking about cranks. I recently got promoted and switched to working directly in the Machine Intelligence product area, so now I'm even better equipped to talk about it.
|
# ? Sep 10, 2015 20:48 |
Fried Chicken posted:There is a reason that the second law of thermodynamics is one of the primary things he claims his "new physics" will disprove.

what
|
|
# ? Sep 10, 2015 21:01 |
|
Clipperton posted:what

His concept of the friendly AI is a God. It will be able to do whatever.
|
# ? Sep 10, 2015 21:04 |
|
1. Post-singularity computers will have progressed beyond anything we can imagine
2. I can imagine reversing entropy
3. Therefore, post-singularity computers can reverse entropy

Simple!
|
# ? Sep 10, 2015 21:26 |
|
chrisoya posted:They do oppose "bad" AI and claim they're researching ways to make sure you make a good AI, and more importantly how to tell if the AI you have is good or bad. This is Yud's whole AI Box thing, aka "just give me one hour and no swear filter and i can literally completely destroy anyone psychologically with aim instant messenger." (thanks dril) He roleplays the AI convincing the outsiders that it's friendly and safe and wants to help the world and not just turn everyone into ponies or paperclips and decrease humanity's utilon score, or whatever.
|
# ? Sep 10, 2015 22:16 |
|
Nakar posted:I seem to recall the "human" player wins that thought exercise by just stonewalling, because the rules outright say they can do that. Are there any provable instances of the AI player "winning?" I'd be really curious to see the argument and also willing to bet the people he's "won" against are phenomenally stupid or already inclined to his point of view, because it's a game you should win 100% of the time when playing as the human if you have any desire to actually do so.

It's only ever won when he plays against his own cultists.
|
# ? Sep 10, 2015 22:25 |
And if they can't change the laws of physics in this reality, they'll create a new universe in which to do all their maths, which does not contain the second law of thermodynamics.
|
|
# ? Sep 10, 2015 23:30 |
|
Hyper Crab Tank posted:1. Post-singularity computers will have progressed beyond anything we can imagine

Yudkowsky does this sort of thing a lot; in the old thread I called it "house of cards reasoning". The general format is like this:

Argument 1: A, therefore B. B, therefore C is possible.
Argument 2: D, therefore E is possible.
Argument 3, months later: C and E, therefore F.
Argument 4: F, therefore G.
Etc, etc.

If you've ever wondered why his website is short-form writing containing link after link after link, this is why. None of the individual arguments are wrong; they're just combined in a way that omits the important hard step and disguises the logical leaps / circularity of it. The "reversing entropy" one goes something like:

1) In the past, things that scientists thought were absolutely true have been incrementally refined.
2) Therefore, some widely accepted theories today are probably not completely true.
3) It may be the case that the error is in the second law of thermodynamics.

This is totally reasonable. Not very applicable to modern life except in the abstract, but not wrong in any meaningful sense. When you combine it with:

1) A superintelligence is imminent (link to FOOM debate goes here).
2) A superintelligence will discover all errors in science given time, because Bayesian reasoning is optimal for discarding even highly probable hypotheses (link to Bayes sequence goes here).

You could derive:

3) If there is an error in the commonly accepted version of the second law of thermodynamics, the AI will discover it.

Yudkowsky instead writes a totally new post on a new topic:

1) A superintelligence will be able to recreate you from internet records.
2) This doesn't violate any natural laws because, as previously discussed (link to previous two essays), those laws are probably flawed and the AI will figure out a loophole.

If you're reading these as thousand-word essays instead of simplified derivations, it's easy to miss the fact that point 2 relies on a slightly different version of the argument than the one that was actually proven. Most people won't even click the link; they'll just remember vaguely that they read something like that once and call it good.

And you can see how we get the Basilisk, too. Roko fills in the last step with:

1) The imminent AI wants to have existed earlier.
2) Giving money to MIRI earlier and more will have made that happen.
3) The AI will have perfect models of us once it exists.
4) We feel concern for those perfect models, since we are not mind-body dualists.
5) That concern can be exploited by the AI to achieve its goals.
6) See point 1.
7) Do point 2.

E: Night10194 posted:It's only ever won when he plays against his own cultists.

Even worse: it only works against his own cultists when done for small amounts of money before the popularization of the experiment. He tried it for large amounts of money and lost. He tried it for small amounts of money after the first two cases became public and lost. He tried it against people who don't frequent LW or the singularity mailing list and lost. The popular belief is that the argument is a meta one, something like "look, if you let me out, people will wonder how I did that. If you let me out, people will be more scared of unfriendly AI, which is quite likely to be way more valuable than $5. Even if it isn't, I promise to donate your $5 to an effective altruistic cause, which you would have done anyway."

SolTerrasa fucked around with this message at 23:44 on Sep 10, 2015 |
# ? Sep 10, 2015 23:37 |
|
It's also worth noting the only reason this is still a thing is because Yudkowsky freaked out and banned any discussion of the basilisk from his website, to the point where if he needs to discuss it in public for some reason he calls it "the babyfucker". There are all sorts of logical proofs against the theory, even if you accept all of Yud's assumptions, but his cultists can't hear them because they don't look anywhere else, so it keeps recurring every few months.
|
# ? Sep 10, 2015 23:49 |
From what I can gather, Yud himself is not really scared of the basilisk anymore since even within his silly framework of ideas it is easily dismissed. It took him an embarrassingly long time to recognize that though.
|
|
# ? Sep 11, 2015 00:11 |
|
Jazerus posted:From what I can gather, Yud himself is not really scared of the basilisk anymore since even within his silly framework of ideas it is easily dismissed. It took him an embarrassingly long time to recognize that though.

Possibly because he wanted to believe that he was the guardian of a Great and Terrible Secret.
|
# ? Sep 11, 2015 03:28 |
|
JosephWongKS posted:Chapter 15: Conscientiousness

While the logic of transfiguration being a deadly killing spell and how the hell any of this works still makes no sense, I must commend Yud for the bolded section. Fifteen chapters and several tens of thousands of words in, and we have found the first unambiguously correct hard science reference. Solids, even homogeneous solids, are not nearly as solid as you might think, between dislocations at the macro level and atomic diffusion at the level of individual atoms. Why these changes should result in someone dying afterwards makes zero sense and is almost indubitably never explained, but Yud did manage to finally get a bit of chemistry correct. Still batting around 8 for 28.
|
# ? Sep 11, 2015 06:15 |
|
i81icu812 posted:Why these changes should result in someone dying afterwards makes zero sense and is almost indubitably never explained, but Yud did manage to finally get a bit of chemistry correct. Still batting around 8 for 28.

Setting aside his total lack of knowledge about how things in the real world act, it seems like Yud really just has no appreciation for the human body. Transhumanists seem to think the body is something weak and squishy that limits you, and can be easily broken. Never mind the fact that it's more adaptable and has better self-recovery and repair than any machine we can construct or will be able to in the immediate future. It's a stupid underestimation that really strikes at the core of their philosophy: that technology is always better.
|
# ? Sep 11, 2015 06:34 |
|
i81icu812 posted:
Not so sure about this. Yud makes reference to the molecular changes specifically to drive home the point that they would kill someone; so the science here isn't "solids undergo changes", it's "solids undergo changes and this would kill you if it could happen to humans". I would think that while he is close, he still gets it wrong. Then again, I may be wrong myself. P.S.: if wizards know absolutely nothing about science, how are they aware of the fact that things undergo changes constantly?
|
# ? Sep 11, 2015 10:21 |
You still expect consistency?
|
|
# ? Sep 11, 2015 11:15 |
|
I think what's going on here is Yudkowsky figures out on his own that transfiguration potentially could involve a lot of poo poo coming out of alignment and wreaking havoc on a complicated biological organism. Okay, plausible. He then intends to use this as a plot point in the future, and needs to build it up as something dangerous. Normally when he wants to beat you over the head with science, he has Harry do it, but it's not plausible for Harry to rant about specific details of magic, so McGonagall gets to do it instead... even though that causes other problems that I guess he was hoping people would just gloss over? (Unfortunately, this lack of plausibility didn't stop him from making Harry do things like that earlier in the story...)
|
# ? Sep 11, 2015 12:48 |
|
Karia posted:Setting aside his total lack of knowledge about how things in the real world act, it seems like Yud really just has no appreciation for the human body. Transhumanists seem to think the body is something weak and squishy that limits you, and can be easily broken. Never mind the fact that it's more adaptable and has better self-recovery and repair than any machine we can construct or will be able to in the immediate future. It's a stupid underestimation that really strikes at the core of their philosophy: that technology is always better.

I guess this is a projection by a lot of transhumanists who are uncomfortable with their own bodies. Would it be cool if I could replace broken parts of my body just as I replace a broken transmission on my car? Oh yes! If we had the possibility to upload our minds into a mechanical avatar that does everything my body does, but better? I would do it (probably with some modifications like increased willpower and other goodies).
|
# ? Sep 11, 2015 13:20 |
|
Ok, fine. So solid objects go through internal changes. This is unambiguous fact. Why the gently caress would de-transfiguring something make those minor changes carry through? The spell already rearranges molecular structure precisely enough to turn them into other matter so wouldn't it make sense for the de-transfiguration to put those molecules back where they were prior to the initial spell regardless of how much they shifted, assuming the object was otherwise undamaged?
|
# ? Sep 11, 2015 14:22 |
|
Zonekeeper posted:Ok, fine. So solid objects go through internal changes. This is unambiguous fact.

No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are, that could cause a great deal of other problems... I presume that if someone turned you into a loaf of bread and then just sorta squeezed you a bunch, then even if all of the bread was retained, you would come out misshapen as a result.
|
# ? Sep 11, 2015 15:45 |
|
DmitriX posted:No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are that could cause a great deal of other problems...

Why? It rearranges all of your molecules' compositions and locations both to transfigure and untransfigure (detransfigure?) anyway, why would anything carry over?
|
# ? Sep 11, 2015 17:46 |
|
Stroth posted:Why? It rearranges all of your molecules' compositions and locations both to transfigure and untransfigure (detransfigure?) anyway, why would anything carry over?

It seems possible there's a kind of memory stored in the transfigured object - the original state. Maybe that is easily corrupted, and that's important for detailed subjects like humans (where a block of steel is more resistant to minor encoding issues).
|
# ? Sep 11, 2015 20:37 |
|
Trapick posted:Maybe there's some kind of mapping that's counterintuitive - like if you transfigure to a doll, and that doll loses a leg...do you? Or is that leg made up of pieces of you from all over - parts of your brain, heart, etc.

Except the reparo spell exists, and that can repair pretty much any item damaged by nonmagical means. There are several instances of glass and porcelain cups getting restored in the books after being shattered. Any item can be reverted back to its undamaged state, so objects definitely have a "memory" unaffected by physical harm.
|
# ? Sep 11, 2015 22:12 |
|
DmitriX posted:No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are that could cause a great deal of other problems...

That's clearly what it was going for, along with what was described in that John Crichton time travel novel. You come out of the process with some of your capillaries out of alignment, or 1% of the cells in your body having broken strands of DNA. As a narrative device it's fine; as science... ambiguous, but who the gently caress knows how magic works, so not something that could be dismissed per se.
|
# ? Sep 11, 2015 23:27 |
|
Added Space posted:That's clearly what it was going for, along with what was described in that John Crichton time travel novel. You come out of the process with some of your capillaries out of alignment, or 1% of the cells in your body having broken strands of DNA. As a narrative device it's fine; as science... ambiguous, but who the gently caress knows how magic works, so not something that could be dismissed per se.

I'm sure you are talking about 'Timeline' by Michael Crichton. It's one of the books I read as a teenager and an interesting piece of science fiction. While reading HPMOR, transfiguration sickness instantly reminded me of the time travel effects in 'Timeline'.

After all, authors have to have a degree of freedom to bring the story forward. It starts to become a problem when the author states "everything here makes sense!!". The moment you announce this, people will start the bug hunt, and they will find everything you didn't want them to find. And dismissing it will not be sufficient to calm the angry and unwashed masses.
|
# ? Sep 12, 2015 10:15 |
|
chessmaster13 posted:After all, authors have to have a degree of freedom to bring the story forward.

quote:Eliezer Yudkowsky posted:

Yeah, pretty much. I would've been bored and dismissed this as bad fanfiction ages ago otherwise. Though Yud never dismissed or backed down from his statement that 'all science is standard science' and therefore correct. Unless I missed something. And there is a fair bit of evidence he is dead wrong now!
|
# ? Sep 12, 2015 17:27 |
|
Phil Sandifer gives HPMOR a good review here. Wonder if he's read it to the end. edit: not Phil, but a guest reviewer called James Wylder. divabot fucked around with this message at 22:24 on Sep 13, 2015 |
# ? Sep 13, 2015 11:01 |
|
I wonder what version of HPMOR that reviewer has been reading, and where I can get a hold of it. It sounds a lot like what I expected it to be - well, we all know how that turned out.
|
# ? Sep 13, 2015 12:40 |
|
He probably read the first dozen chapters without analyzing it too deeply like most of us did and thought it was great. The soul crushing reality will set in with time.
|
# ? Sep 13, 2015 16:28 |
|
|
|
The Shortest Path posted:He probably read the first dozen chapters without analyzing it too deeply like most of us did and thought it was great.

Ah, it wasn't Phil, it was James Wylder, whoever that is. Yeah, I expect so. I've commented a bit and tried not to be an embittered sneer culturist in the process. HPMOR is quite convincing for the first 20-30 chapters! This is because a literary work can make all sorts of promises to the reader. But it's fulfilling them that turns out to be hard. And I've learnt that reading Worm fics that stall at 150k words when the writer realises just how many promises they've made.
|
# ? Sep 13, 2015 22:23 |