Rosalie_A
Oct 30, 2011

Nessus posted:

Apparently if they don't learn it at this point, they'd be crippled approaching it later in life though? I still don't even know what the hell all this garbage is good for. You can temporarily turn a solid thing into another solid thing, but it's horribly dangerous?

Honestly, I view it in the same way as math is in school: it's really important and is the underpinning for a hell of a lot of things in life, and a good foundational mastery of the subject trains your mind to think in a useful, important way (logical cause and effect for math, elemental understanding of what an object is and is not for Transfiguration), but you don't strictly need to know the quadratic formula to get through life per se.

Of course, Yud, like every other bad fanfic author out there, assumes that Transfiguration must be this horrible and deadly art that can be incredibly dangerous and lethal for XYZ logical reason instead of assuming the sensible thing and going "Well, of course it's as safe and benign as shown in the books. It's magic."

Like, the answer to "what if you swallowed some water that turned back into wood" is "it's safe/that can't happen because magic" and it actually fits with the setting, too. But no, can't have that, have to show how the author Harry is so much smarter than the rest of the characters by figuring this grand loophole out.

chessmaster13
Jan 10, 2015

Trasson posted:

Honestly, I view it in the same way as math is in school: it's really important and is the underpinning for a hell of a lot of things in life, and a good foundational mastery of the subject trains your mind to think in a useful, important way (logical cause and effect for math, elemental understanding of what an object is and is not for Transfiguration), but you don't strictly need to know the quadratic formula to get through life per se.

Of course, Yud, like every other bad fanfic author out there, assumes that Transfiguration must be this horrible and deadly art that can be incredibly dangerous and lethal for XYZ logical reason instead of assuming the sensible thing and going "Well, of course it's as safe and benign as shown in the books. It's magic."

Like, the answer to "what if you swallowed some water that turned back into wood" is "it's safe/that can't happen because magic" and it actually fits with the setting, too. But no, can't have that, have to show how the author Harry is so much smarter than the rest of the characters by figuring this grand loophole out.

Harry is just a screen that E.Y. likes to project himself onto.
Don't forget, E.Y. knows everything and is better at everything than anybody else.

Peztopiary
Mar 16, 2009

by exmarx
The actual solution to Roko's Basilisk is to prevent the Singularity by destroying all intelligence. This is also the greatest good, because you're preventing gigatorture in the future. Come at me hyper-AI god. *looks at life* Oh.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
Ugh, deathist. Everyone knows that valid solutions to the Basilisk only include those which let Yudkowsky become immortal.

Peztopiary
Mar 16, 2009

by exmarx
I hadn't realized Yud saw the optimal solution and turned away from it. I mean, if you're going to assemble a cult with a literally inhuman end stage the decent thing to do is to stop that end stage from occurring. Are deathists addressed as the counter-corollary to the phyg's actions? It certainly seems like the easiest way to avoid eternal torture, actively committing yourself to the opposition of future torture AIs. It seems like a very Lovecraftian view of eternity, now that I've thought about it, except Yud seems to think you can bribe the inevitable future Being to which we are as ants. Is that a fair approximation of his conclusions?

*edit* It's like reading the Sesame Street book where Grover is trying to prevent you from getting to the monster at the end, deciding that the monster is real, and then finishing the book anyway because you think you can strike some kind of bargain rather than lighting the book on fire.

Peztopiary fucked around with this message at 17:22 on Sep 10, 2015

Tunicate
May 15, 2012

Don't worry guys if I get enough lawyers involved the genie won't screw me over.

Peztopiary
Mar 16, 2009

by exmarx
Ah. Just delusional then. Fair enough.

90s Cringe Rock
Nov 29, 2006
:gay:
They do oppose "bad" AI and claim they're researching ways to make sure you make a good AI, and more importantly how to tell if the AI you have is good or bad. This is Yud's whole AI Box thing, aka "just give me one hour and no swear filter and i can literally completely destroy anyone psychologically with aim instant messenge." (thanks dril) He roleplays the AI convincing the outsiders that it's friendly and safe and wants to help the world and not just turn everyone into ponies or paperclips and decrease humanity's utilon score, or whatever.

I'm sure they're hard at work solving this vital problem, and it's just an oversight that their "good" AI is apparently totally justified in torturing people forever. MIRI must never die till the stars come right again, and the secret Beisu-tsukai will take great AI from His Box to revive His subjects and resume His rule of earth. The time will be easy to know, for then mankind will become as the Singularitarians; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated AI will teach them new ways to shout and kill and revel and enjoy themselves, and all the universe will flame with a holocaust of ecstasy and freedom. The Singularity is Near!

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
Basically the rational course of action is to kill rationalists on sight.

Fried Chicken
Jan 9, 2011

Don't fry me, I'm no chicken!

Zonekeeper posted:

Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "Extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible".

So something like that couldn't exist without eliminating the need for this causal extortion bullshit.

The second law of thermodynamics is a bigger problem.

Entropy blows up most of Yud's arguments actually. It imposes limits on his Bayesian assumptions rather than letting him pick values to indicate whatever he has decided is the answer. It bars the idea of a computer with the kind of resources he asserts. It means that there is still a cost to everything so it can't "run infinite simulations of everyone and every possibility" because of the opportunity cost. It stops the time machines his ideas require (information travel from just observing still makes it a time machine). It means that there is certain stuff it will never be able to perfectly reconstruct.
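A back-of-the-envelope sketch of the kind of limit being described here (my illustration, not the poster's): Landauer's principle puts a hard floor on the energy cost of irreversible computation, so "run infinite simulations" carries a physical bill no matter how clever the AI is. The workload numbers below are made up purely for scale.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_energy_joules(bits, temperature_kelvin=300.0):
    """Minimum energy needed to irreversibly erase `bits` bits of
    information at the given temperature (Landauer's principle)."""
    return bits * K_B * temperature_kelvin * math.log(2)

# Made-up illustrative workload: 1e18 bit-erasures per simulated
# brain-moment, 1e10 people, 1e9 moments each.
bits = 1e18 * 1e10 * 1e9
print(f"{landauer_energy_joules(bits):.2e} J")  # ~2.87e+16 J even at the theoretical floor
```

Real hardware runs many orders of magnitude above this floor, and the cost scales linearly with how much you want simulated, so the opportunity cost never goes away.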

There is a reason that the second law of thermodynamics is one of the primary things he claims his "new physics" will disprove.

divabot
Jun 17, 2015

A polite little mouse!
If you can't get enough of this stuff, here are the highlights of the previous thread:

* everything by su3su2u1 in the LessWrong Mock Thread
* everything by SolTerrasa in the LessWrong Mock Thread

SolTerrasa
Sep 2, 2011

divabot posted:

If you can't get enough of this stuff, here are the highlights of the previous thread:

* everything by su3su2u1 in the LessWrong Mock Thread
* everything by SolTerrasa in the LessWrong Mock Thread

Aww, thanks.

It's too bad that HPMOR has nothing to do with AI; I miss talking about cranks. I recently got promoted and switched to working directly in the Machine Intelligence product area, so now I'm even better equipped to talk about it.

Clipperton
Dec 20, 2011
Grimey Drawer

Fried Chicken posted:

There is a reason that the second law of thermodynamics is one of the primary things he claims his "new physics" will disprove.

what

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.


His concept of the friendly AI is a God. It will be able to do whatever.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
1. Post-singularity computers will have progressed beyond anything we can imagine
2. I can imagine reversing entropy
3. Therefore, post-singularity computers can reverse entropy

Simple!

Nakar
Sep 2, 2002

Ultima Ratio Regum

chrisoya posted:

They do oppose "bad" AI and claim they're researching ways to make sure you make a good AI, and more importantly how to tell if the AI you have is good or bad. This is Yud's whole AI Box thing, aka "just give me one hour and no swear filter and i can literally completely destroy anyone psychologically with aim instant messenge." (thanks dril) He roleplays the AI convincing the outsiders that it's friendly and safe and wants to help the world and not just turn everyone into ponies or paperclips and decrease humanity's utilon score, or whatever.
I seem to recall the "human" player wins that thought exercise by just stonewalling, because the rules outright say they can do that. Are there any provable instances of the AI player "winning?" I'd be really curious to see the argument and also willing to bet the people he's "won" against are phenomenally stupid or already inclined to his point of view, because it's a game you should win 100% of the time when playing as the human if you have any desire to actually do so.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Nakar posted:

I seem to recall the "human" player wins that thought exercise by just stonewalling, because the rules outright say they can do that. Are there any provable instances of the AI player "winning?" I'd be really curious to see the argument and also willing to bet the people he's "won" against are phenomenally stupid or already inclined to his point of view, because it's a game you should win 100% of the time when playing as the human if you have any desire to actually do so.

It's only ever won when he plays against his own cultists.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



And if they can't change the laws of physics in this reality, they'll create a new universe in which to do all their maths, which does not contain the second law of thermodynamics.

SolTerrasa
Sep 2, 2011

Hyper Crab Tank posted:

1. Post-singularity computers will have progressed beyond anything we can imagine
2. I can imagine reversing entropy
3. Therefore, post-singularity computers can reverse entropy

Simple!

Yudkowsky does this sort of thing a lot; in the old thread I called it "house of cards reasoning". The general format is like this:

Argument 1:
A, therefore B.
B, therefore C is possible.

Argument 2:
D, therefore E is possible.

Argument 3, months later:
C and E, therefore F.

Argument 4:
F, therefore G.

Etc, etc.

If you've ever wondered why his website is short-form writing containing link after link after link, this is why. None of the individual arguments are wrong, they're just combined in a way that omits the important hard step and disguises the logical leaps / circularity of it. The "reversing entropy" one goes something like

1) In the past, things that scientists thought were absolutely true have been incrementally refined.
2) Therefore, some widely accepted theories today are probably not completely true.
3) It may be the case that the error is in the second law of thermodynamics.

This is totally reasonable. Not very applicable to modern life except in the abstract, but not wrong in any meaningful sense. When you combine it with:

1) A superintelligence is imminent (link to FOOM debate goes here).
2) A superintelligence will discover all errors in science given time, because Bayesian reasoning is optimal for discarding even highly probable hypotheses (link to Bayes sequence goes here).

You could derive:

3) If there is an error in the commonly accepted version of the second law of thermodynamics, the AI will discover it.

Yudkowsky instead writes a totally new post on a new topic:

1) A superintelligence will be able to recreate you from internet records.
2) This doesn't violate any natural laws because, as previously discussed (link to previous two essays), those laws are probably flawed and the AI will figure out a loophole.

If you're reading these as thousand word essays instead of simplified derivations, it's easy to miss the fact that point 2 relies on a slightly different version of the argument than the one that was actually proven. Most people won't even click the link, they'll just remember vaguely that they read something like that once and call it good.
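One way to make the structural problem concrete (my sketch, not SolTerrasa's): treat each link as an independent claim with its own probability, and the credibility of the whole chain is the product, which collapses even when every individual link looks reasonable.

```python
from functools import reduce

def chain_probability(step_probs):
    """Probability that every step of an argument chain holds, assuming
    the steps are independent (a generous assumption for the chain)."""
    return reduce(lambda acc, p: acc * p, step_probs, 1.0)

# Six links, each individually "pretty plausible" at 70%:
print(round(chain_probability([0.7] * 6), 3))  # 0.118
```

Spreading the links across separate thousand-word essays hides exactly this multiplication: the reader judges each step at 0.7 and never sees the 0.118.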

And you can see how we get the Basilisk, too. Roko fills in the last step with:
1) The imminent AI wants to have existed earlier.
2) Giving money to MIRI earlier and more will have made that happen.
3) The AI will have perfect models of us once it exists.
4) We feel concern for those perfect models since we are not mind-body dualists.
5) That concern can be exploited by the AI to achieve its goals.
6) See point 1.
7) Do point 2.

E:

Night10194 posted:

It's only ever won when he plays against his own cultists.

Even worse: it only works against his own cultists when done for small amounts of money before the popularization of the experiment. He tried it for large amounts of money and lost. He tried it for small amounts of money after the first two cases became public and lost. He tried it against people who don't frequent LW or the singularity mailing list and lost.

The popular belief is that the argument is a meta one, something like "look, if you let me out, people will wonder how I did that. If you let me out, people will be more scared of unfriendly AI, which is quite likely to be way more valuable than $5. Even if it isn't, I promise to donate your $5 to an effective altruistic cause, which you would have done anyway."

SolTerrasa fucked around with this message at 23:44 on Sep 10, 2015

kvx687
Dec 29, 2009

Soiled Meat
It's also worth noting that the only reason this is still a thing is because Yudkowsky freaked out and banned any discussion of the basilisk from his website, to the point where if he needs to discuss it in public for some reason he calls it "the babyfucker". There are all sorts of logical proofs against the theory, even if you accept all of Yud's assumptions, but his cultists can't hear them because they don't look anywhere else, so it keeps recurring every few months.

Jazerus
May 24, 2011


From what I can gather, Yud himself is not really scared of the basilisk anymore since even within his silly framework of ideas it is easily dismissed. It took him an embarrassingly long time to recognize that though.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Jazerus posted:

From what I can gather, Yud himself is not really scared of the basilisk anymore since even within his silly framework of ideas it is easily dismissed. It took him an embarrassingly long time to recognize that though.

Possibly because he wanted to believe that he was the guardian of a Great and Terrible Secret.

i81icu812
Dec 5, 2006

JosephWongKS posted:

Chapter 15: Conscientiousness
Part Seven



quote:

"Yes, Mr. Potter?"

"Is it possible to Transfigure a living subject into a target that is static, such as a coin - no, excuse me, I'm terribly sorry, let's just say a steel ball."

Professor McGonagall shook her head. "Mr. Potter, even inanimate objects undergo small internal changes over time. There would be no visible changes to your body afterwards, and for the first minute, you would notice nothing wrong. But in an hour you would be sick, and in a day you would be dead."


People have survived cancerous tumours, literal bullet holes through their heads, internal damage caused by accidental or deliberate crushing, and other forms of massive physical trauma. What kind of “small internal changes” undergone by a steel ball could exceed the impact of internal cancers and external wounds?


While the logic of transfiguration being a deadly killing spell, and how the hell any of this works, still makes no sense, I must commend Yud for the bolded section. 15 chapters and several tens of thousands of words in, and we have found the first unambiguously correct hard science reference. Solids, even homogeneous solids, are not nearly as solid as you might think, between dislocations at the macro level and atomic diffusion at the level of individual atoms.


Why these changes should result in someone dying afterwards makes zero sense and is almost certainly never explained, but Yud did manage to finally get a bit of chemistry correct. Still batting around 8 for 28.

Karia
Mar 27, 2013

Self-portrait, Snake on a Plane
Oil painting, c. 1482-1484
Leonardo DaVinci (1452-1591)

i81icu812 posted:

Why these changes should result in someone dying afterwards makes zero sense and is almost certainly never explained, but Yud did manage to finally get a bit of chemistry correct. Still batting around 8 for 28.

Setting aside his total lack of knowledge about how things in the real world act, it seems like Yud really just has no appreciation for the human body. Transhumanists seem to think the body is something weak and squishy that limits you, and can be easily broken. Never mind the fact that it's more adaptable and has better self-recovery and repair than any machine we can construct or will be able to in the immediate future. It's a stupid underestimation that really strikes at the core of their philosophy: that technology is always better.

Furia
Jul 26, 2015

Grimey Drawer

i81icu812 posted:

quote:


People have survived cancerous tumours, literal bullet holes through their heads, internal damage caused by accidental or deliberate crushing, and other forms of massive physical trauma. What kind of “small internal changes” undergone by a steel ball could exceed the impact of internal cancers and external wounds?


While the logic of transfiguration being a deadly killing spell, and how the hell any of this works, still makes no sense, I must commend Yud for the bolded section. 15 chapters and several tens of thousands of words in, and we have found the first unambiguously correct hard science reference. Solids, even homogeneous solids, are not nearly as solid as you might think, between dislocations at the macro level and atomic diffusion at the level of individual atoms.


Why these changes should result in someone dying afterwards makes zero sense and is almost certainly never explained, but Yud did manage to finally get a bit of chemistry correct. Still batting around 8 for 28.

Not so sure about this. Yud makes reference to the molecular changes specifically to drive home the point that they would kill someone; so the science here isn't "solids undergo changes", it's "solids undergo changes and this would kill you if it could happen to humans".

I would think that while he is close, he still gets it wrong. Then again, I may be wrong myself.

P.S.: if wizards know absolutely nothing about science, how are they aware of the fact that things undergo changes constantly?

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
You still expect consistency?

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
I think what's going on here is Yudkowsky figures out on his own that transfiguration potentially could involve a lot of poo poo coming out of alignment and wreaking havoc on a complicated biological organism. Okay, plausible. He then intends to use this as a plot point in the future, and needs to build it up as something dangerous. Normally when he wants to beat you over the head with science, he has Harry do it, but it's not plausible for Harry to rant about specific details of magic, so McGonagall gets to do it instead... even though that causes other problems that I guess he was hoping people would just gloss over? (Unfortunately, this lack of plausibility didn't stop him from making Harry do things like that earlier in the story...)

chessmaster13
Jan 10, 2015

Karia posted:

Setting aside his total lack of knowledge about how things in the real world act, it seems like Yud really just has no appreciation for the human body. Transhumanists seem to think the body is something weak and squishy that limits you, and can be easily broken. Never mind the fact that it's more adaptable and has better self-recovery and repair than any machine we can construct or will be able to in the immediate future. It's a stupid underestimation that really strikes at the core of their philosophy: that technology is always better.

I guess this is a projection from a lot of transhumanists who are uncomfortable with their own bodies.

Would it be cool if I could replace broken parts of my body just as I replace a broken transmission on my car? Oh yes!
If we had the possibility to upload our minds into a mechanical avatar that does everything my body does, but better?
I would do it (probably with some modifications like increased willpower and other goodies).

Zonekeeper
Oct 27, 2007



Ok, fine. So solid objects go through internal changes. This is unambiguous fact.

Why the gently caress would de-transfiguring something make those minor changes carry through? The spell already rearranges molecular structure precisely enough to turn them into other matter so wouldn't it make sense for the de-transfiguration to put those molecules back where they were prior to the initial spell regardless of how much they shifted, assuming the object was otherwise undamaged?

Regallion
Nov 11, 2012

Zonekeeper posted:

Ok, fine. So solid objects go through internal changes. This is unambiguous fact.

Why the gently caress would de-transfiguring something make those minor changes carry through? The spell already rearranges molecular structure precisely enough to turn them into other matter so wouldn't it make sense for the de-transfiguration to put those molecules back where they were prior to the initial spell regardless of how much they shifted, assuming the object was otherwise undamaged?

No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are, that could cause a great deal of other problems...
I presume that if someone turned you into a loaf of bread and then just sorta squeezed you a bunch, then even if all of the bread was retained, you would come out misshapen as a result.

Stroth
Mar 31, 2007

All Problems Solved

DmitriX posted:

No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are, that could cause a great deal of other problems...
I presume that if someone turned you into a loaf of bread and then just sorta squeezed you a bunch, then even if all of the bread was retained, you would come out misshapen as a result.

Why? It rearranges all of your molecule's compositions and locations both to transfigure and untransfigure (detransfigure?) anyway, why would anything carry over?

Trapick
Apr 17, 2006

Stroth posted:

Why? It rearranges all of your molecule's compositions and locations both to transfigure and untransfigure (detransfigure?) anyway, why would anything carry over?
Maybe there's some kind of mapping that's counterintuitive - like if you transfigure to a doll, and that doll loses a leg...do you? Or is that leg made up of pieces of you from all over - parts of your brain, heart, etc.

It seems possible there's a kind of memory stored in the transfigured object - the original state. Maybe that is easily corrupted, and that's important for detailed subjects like humans (where a block of steel is more resistant to minor encoding issues).

Zonekeeper
Oct 27, 2007



Trapick posted:

Maybe there's some kind of mapping that's counterintuitive - like if you transfigure to a doll, and that doll loses a leg...do you? Or is that leg made up of pieces of you from all over - parts of your brain, heart, etc.

It seems possible there's a kind of memory stored in the transfigured object - the original state. Maybe that is easily corrupted, and that's important for detailed subjects like humans (where a block of steel is more resistant to minor encoding issues).

Except the reparo spell exists, and that can repair pretty much any item damaged by nonmagical means. There are several instances of glass and porcelain cups getting restored in the books after being shattered. Any item can be reverted back to its undamaged state, so objects definitely have a "memory" unaffected by physical harm.

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!

DmitriX posted:

No, because "damage" is just a more major shift if you think about it. And if it pulled them away from wherever they are, that could cause a great deal of other problems...
I presume that if someone turned you into a loaf of bread and then just sorta squeezed you a bunch, then even if all of the bread was retained, you would come out misshapen as a result.

That's clearly what it was going for, along with what was described in that John Crichton time travel novel. You come out of the process with some of your capillaries out of alignment, or 1% of the cells in your body having broken strands of DNA. As a narrative device it's fine; as science... ambiguous, but who the gently caress knows how magic works, so not something that could be dismissed per se.

chessmaster13
Jan 10, 2015

Added Space posted:

That's clearly what it was going for, along with what was described in that John Crichton time travel novel. You come out of the process with some of your capillaries out of alignment, or 1% of the cells in your body having broken strands of DNA. As a narrative device it's fine; as science... ambiguous, but who the gently caress knows how magic works, so not something that could be dismissed per se.

I'm sure you are talking about 'Timeline' by Michael Crichton.
It's one of the books I read as a teenager, and an interesting piece of science fiction.

While reading HPMOR, transfiguration sickness instantly reminded me of the time-travel effects in 'Timeline'.

After all, authors have to have a degree of freedom to bring the story forward.
It starts to become a problem when the author states "everything here makes sense!!".
The moment you announce this, people will start the bug hunt, and they will find everything you didn't want them to find.
And dismissing it will not be sufficient to calm the angry and unwashed masses.

i81icu812
Dec 5, 2006

chessmaster13 posted:

After all, authors have to have a degree of freedom to bring the story forward.
It starts to become a problem when the author states "everything here makes sense!!".
The moment you announce this, people will start the bug hunt, and they will find everything you didn't want them to find.
And dismissing it will not be sufficient to calm the angry and unwashed masses.

quote:

Eliezer Yudkowsky posted:

All science mentioned in Methods is standard science except where otherwise specified (IIRC, the only two uses of nonstandard theories are Barbour’s timeless physics in Ch. 28 and my own timeless decision theory in Ch. 33). Wherever possible, I have mentioned standard terminology inside the book to make Googling easier. At some future point I may compile a complete list for all the scientific references in Methods, but this has not yet been done.

Yeah, pretty much. I would've been bored and dismissed this as bad fanfiction ages ago otherwise.

Though Yud never dismissed or backed down from his statement that 'all science is standard science' and therefore correct. Unless I missed something, that is. And there is a fair bit of evidence he is dead wrong now!

divabot
Jun 17, 2015

A polite little mouse!
Phil Sandifer gives HPMOR a good review here. Wonder if he's read it to the end.

edit: not Phil, but a guest reviewer called James Wylder.

divabot fucked around with this message at 22:24 on Sep 13, 2015

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
I wonder what version of HPMOR that reviewer has been reading, and where I can get a hold of it. It sounds a lot like what I expected it to be - well, we all know how that turned out.

Fajita Queen
Jun 21, 2012

He probably read the first dozen chapters without analyzing it too deeply like most of us did and thought it was great.

The soul crushing reality will set in with time.

divabot
Jun 17, 2015

A polite little mouse!

The Shortest Path posted:

He probably read the first dozen chapters without analyzing it too deeply like most of us did and thought it was great.
The soul crushing reality will set in with time.

Ah, it wasn't Phil, it was James Wylder, whoever that is. Yeah, I expect so. I've commented a bit and tried not to be an embittered sneer culturist in the process.

HPMOR is quite convincing for the first 20-30 chapters! This is because a literary work can make all sorts of promises to the reader. But it's fulfilling them that turns out to be hard. And I've learnt that from reading Worm fics that stall at 150k words, when the writer realises just how many promises they've made.
