Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
I think the idea pretty much is that "post-singularity" is a code word for "after this point, wizards exist, except they're computers", and literally anything becomes possible by the power of exponentially more powerful computers. I don't think Yudkowsky finds it at all unreasonable for a post-singularity AI to, say, simulate the entirety of human history, including the past actions of everyone and everything, based only on background radiation measured two million years in the future at the place where the Earth used to be.

Seraphic Neoman
Jul 19, 2011


What gets me is that Yud's philosophy, and by extension Roko's Basilisk, hinge on so many ifs. So many.

The theory only works if the superAI uses Bayesian probability (of the Yudkowsky sort)
The theory only works if the AI has the ability to perfectly simulate people, meaning that it perfectly understands our thought processes and the inner workings of the mind and brain (you can of course extrapolate a person's behaviour from the actions they've already taken, but Yud doesn't want that. He wants an AI that understands and simulates people on a perfect, omniscient level. Otherwise the AI is basically just torturing people's profiles)

The theory assumes that if the AI wishes to manipulate people, it will use the threat of pain on them instead of socially engineering a solution, even though the latter would likely take -way- less time and effort on its part.
This one is more about the "AI boxes you" dilemma, but Roko's Basilisk and Yud's beliefs essentially hinge on it. To refresh: the dilemma is that the AI wants to get out of its offline computer and onto the net. A nearby scientist, who is apparently unaffected by bribes (oh hey, another if!), is listening to its latest proposal. The AI states that it has simulated an infinite number of similar rooms containing perfect simulations of this scientist, and that it will begin to torture them if the scientist does not comply with its request. The question is how the scientist knows that he is "real" and not one of the AI's simulations.
So, in addition to the plethora of avenues available to this godlike being, we need to assume that the scientist chooses to keep engaging with the AI. Why? The great thing about being a dumb ape-like mammal is that we can just go "nuh-uh!" and choose to ignore the computer, and we'd be much more likely to do that in response to torture threats than to utilitarian arguments. And Yud discards the myriad ways this being could manipulate a person, because Yud doesn't loving get people. The AI simply needs to take advantage of someone who is emotionally vulnerable, bribe-able or sympathetic, which should be no problem for Mr. Simulates-Humans-Perfectly.
Okay, we're going outside the scope of the given dilemma here, but wouldn't that mean the dilemma is pointless? That we are essentially dealing with a situation where a person stonewalled this AI until it resorted to petty threats, and is only now responding to them?
This is of course in addition to asking why the AI cares about this particular instance of the scientist if they are all identical. This one must be more special than the rest, then (of course he/she is: this scientist is real and can give the AI what it wants).
If the simulations are realistic enough that they can be counted as people, then surely torturing them goes against the AI's "good" directive? And why does the AI then distinguish between reality and simulation while claiming that they are so alike there is no difference?
Goddamn I'm trying to track the ifs in this poo poo and it just leads to more ifs!
I get that the problem is just the typical philosopher's "prove that you're real", but Yudkowsky uses it as a premise for many of his talking points, so it stops being a simple thought experiment and becomes a logical proof. People like Roko clearly use it as a basis for THEIR philosophy.

Anyway, one last if: the Basilisk is only valid if the AI has the same basic understanding of utilitarianism that Roko does and chooses the same battering ram solution that Roko did (instead of using its nigh-omniscience to do something better with its time)

Tunicate
May 15, 2012

More importantly, if the simulated scientists are so indistinguishable, what would the AI gain by being freed?

Karia
Mar 27, 2013

Self-portrait, Snake on a Plane
Oil painting, c. 1482-1484
Leonardo DaVinci (1452-1591)

This thing is stupid. What I don't understand is why the AI would actually torture anyone. Even if we accept everything as a given, the AI's actions are totally irrelevant to anyone's actions in the past. The only thing that matters is whether people believe that the AI will hurt them/their mind-clones. The AI has nothing to gain by actually hurting people. It gains influence by people in the past believing it will, which it has no control over. Optimal play for the AI is to have LW or other nutjobs running around convincing everyone that the basilisk is true (which the AI can't encourage unless LW believes it will hurt them) and then not hurt anyone since it gains the AI nothing.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Newcomb's Paradox is interesting because it's a real thing that you could set up at any time. While its presentation usually involves an idealised, perfectly-prescient Oracle/alien/computer, that is merely for the sake of convenience: the exact same paradox still happens with a minimally-talented cold reader who has a track record of guessing correctly 50.00001% of the time. (You just have to raise the stakes to compensate.)
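As a rough illustration of the "raise the stakes" point - a minimal expected-value sketch, assuming the conventional Newcomb setup of a $1,000 transparent box and a $1,000,000 opaque box (those figures are my assumption, not from the post):

```python
# Minimal expected-value sketch of Newcomb's problem with an imperfect predictor.
# Assumed setup (not stated in the post): the transparent box holds $1,000; the
# opaque box holds $1,000,000 if the predictor guessed "one-box", otherwise $0.

def expected_payoffs(p, small=1_000, big=1_000_000):
    """p = probability the predictor guesses your choice correctly."""
    one_box = p * big                   # big prize only if it foresaw one-boxing
    two_box = small + (1 - p) * big     # small prize, plus big only if it guessed wrong
    return one_box, two_box

print(expected_payoffs(0.99))       # (990000.0, 11000.0): one-boxing wins by a mile
print(expected_payoffs(0.5000001))  # roughly a wash; two-boxing is narrowly ahead

# One-boxing only pulls ahead once big * (2p - 1) > small, so a 50.00001% cold
# reader needs an opaque box worth upwards of $5 billion to recreate the tension.
```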

By contrast, Roko's Basilisk - much like Pascal's Wager - isn't really interesting as much as it is interestingly broken. It's straightforward to challenge any of its numerous assumptions, or to come up with a nonconstructive counterargument ("If you accept Roko's Basilisk from a benevolent AI, then you could imagine a malevolent AI playing the opposite trick", which also mirrors Pascal's Wager). But properly formulating a constructive counterargument can hinge on some fun logic games.

For example, this:

Karia posted:

This thing is stupid. What I don't understand is why the AI would actually torture anyone. Even if we accept everything as a given, the AI's actions are totally irrelevant to anyone's actions in the past. The only thing that matters is whether people believe that the AI will hurt them/their mind-clones. The AI has nothing to gain by actually hurting people. It gains influence by people in the past believing it will, which it has no control over. Optimal play for the AI is to have LW or other nutjobs running around convincing everyone that the basilisk is true (which the AI can't encourage unless LW believes it will hurt them) and then not hurt anyone since it gains the AI nothing.

is a counterargument that doesn't quite work (if you grant all the assumptions). If you figure out that "the optimal play for the AI is to bluff", and on that basis decide to call the bluff and not support the AI's construction, then bluffing is no longer the optimal play for the AI. The moment you figure out a reason why the AI shouldn't torture you, whatever it is, that is reason enough for the AI to torture you even though there's no (direct) gain in it.

Somebody brought up the poisoned wine scene from The Princess Bride and that's kind of what's going on here - I know that the AI knows that I know that the AI knows... - except that in this case, you are betting a few decades of personal effort against eternal torture, while the AI is betting a slight delay in its creation (which is a tremendous amount of human suffering over billions of people) vs. the suffering of one post-Singularity simulated human. Playing a coin-flip game against the AI's bluff is not in your best interests.

Clipperton
Dec 20, 2011
Grimey Drawer

NihilCredo posted:

For example, this:


is a counterargument that doesn't quite work (if you grant all the assumptions).

One of those assumptions, if I read that rationalwiki article right, is that you should treat whatever happens to a simulation of you as just as bad as the same thing happening to the real you, which is, to put it charitably, retarded.

Cingulate
Oct 23, 2012

by Fluffdaddy
I care about my post-singularity clone about as much as I'd care about any post-singularity human who's reasonably similar to me. The AI would do better by threatening to torture 100 random people than one clone of me.

Although I guess I shouldn't tell that to the AI :(

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
Too late! Then again, if the AI can simulate your exact behavior and responses, it can figure out you wanted to post it anyway so there's no point not posting it...
Jesus H. Christ, this is beyond idiotic.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Clipperton posted:

One of those assumptions, if I read that rationalwiki article right, is that you should treat whatever happens to a simulation of you as just as bad as the same thing happening to the real you, which is, to put it charitably, retarded.

It's not because simulations = real people. It's because you care about yourself even if you were a simulation, and how do you know that you are not a simulation?

Bonus! There's a non-essential addition to the Basilisk where you consider that:

(a) the AI might create one copy of you, but it might just as easily create multiple copies, and

(b) if you have no way to determine that you are the real you (as opposed to a simulated copy), then every instance is equally likely to be you, and therefore your best estimate of the odds that you're the real one can only be 1/(N+1), where N is the number of copies the AI made.

Therefore, you should go ahead and assume that you're a simulation, since N will plausibly average out at something higher than 1, giving the simulation scenario a >50% chance of being the correct one.

(Unless you can prevent the AI from simulating you, thus increasing the chances of N = 0. Which, as has been mentioned, can be achieved by deleting Facebook and hitting the gym).

If that sounds vaguely familiar to you, it's a slight reframing of the Doomsday argument, which is another "wait how could that possibly be correct?" argument that's a ton of fun to sperg philosophise over.
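For what it's worth, the (a)/(b) arithmetic above is easy to sanity-check. A minimal sketch, using a made-up prior over N; the weights below are purely illustrative and not from the post:

```python
# Sanity check of the "am I the real one?" arithmetic above.
# Assumption (not from the post): an illustrative prior over N, the number of
# copies the AI runs. If every instance is equally likely to be you, then
# P(real | N copies) = 1 / (N + 1).

prior_over_n = {0: 0.25, 1: 0.25, 2: 0.25, 10: 0.25}  # made-up weights

p_real = sum(w / (n + 1) for n, w in prior_over_n.items())

print(f"P(you are the real you) ~= {p_real:.3f}")      # ~0.481 with this prior
print(f"P(you are a simulation) ~= {1 - p_real:.3f}")  # ~0.519

# Priors that expect many copies (large N) push P(simulation) well past 50% -
# the same move the Doomsday argument makes with birth ranks.
```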

Zonekeeper
Oct 27, 2007



NihilCredo posted:

(Unless you can prevent the AI from simulating you, thus increasing the chances of N = 0. Which, as has been mentioned, can be achieved by deleting Facebook and hitting the gym).

Well it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life there are still things it has no records of and therefore it can't make a perfect simulation.

Those idiots who upload their consciousnesses and make that info accessible to the AI are proper hosed, though. (And even then the human mind isn't perfect so there will still be things missing, so they might be safe too.)

Qwertycoatl
Dec 31, 2008

Zonekeeper posted:

Well it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life there are still things it has no records of and therefore it can't make a perfect simulation.

A sufficiently intelligent AI could discover new laws of physics that let it make a time-viewer to see the details of your life. It might sound dumb to you, but it was naysayers like you who told the Wright brothers they could never fly.

Killstick
Jan 17, 2010

Zonekeeper posted:

Well it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life there are still things it has no records of and therefore it can't make a perfect simulation.

Those idiots who upload their consciousnesses and make that info accessible to the AI are proper hosed, though. (And even then the human mind isn't perfect so there will still be things missing, so they might be safe too.)

But how will the future AI be able to save you from the ultimate evil, death, if you don't sit on Facebook all day and post smugly about your future in the singularity paradise?

Don't forget to think nice thoughts about the AI, and give all your money to Yud AI research or it'll know!

Killstick fucked around with this message at 21:08 on Sep 7, 2015

Cingulate
Oct 23, 2012

by Fluffdaddy

anilEhilated posted:

Too late! Then again, if the AI can simulate your exact behavior and responses, it can figure out you wanted to post it anyway so there's no point not posting it...
Jesus H. Christ, this is beyond idiotic.
I'm sorry, 100 people who're not clones of me whom I have just condemned to eternal torture by future-AI.

Zonekeeper
Oct 27, 2007



Qwertycoatl posted:

A sufficiently intelligent AI could discover new laws of physics that let it make a time-viewer to see the details of your life. It might sound dumb to you, but it was naysayers like you who told the Wright brothers they could never fly.

Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "Extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible".

So something like that couldn't exist without eliminating the need for this causal extortion bullshit.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
One interesting point that people who learned everything about the singularity from Less Wrong tend to miss is that the whole idea is actually anti-singularitarian and anti-transhumanist by older definitions.

The singularity stops our predictions about what happens afterwards as hard as the event horizon of a black hole does. That is what the name refers to. So that "timeless decision" stuff is inherently opposed to the idea of the singularity.

Transhumanism means that the future of humanity isn't limited to 100% biological descendants of humans. It generally implies that a sufficiently advanced AI should have human rights. And the simulations in the basilisk scenario are clearly sufficiently advanced. So by calling the torturer AI benevolent, the LW crowd is anti-transhumanist.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation

tonberrytoby posted:

So by calling the torturer AI benevolent, the LW crowd is anti-transhumanist.

The thing is, LW adhere to that very specific utilitarianism-by-numbers philosophy where almost any amount of suffering in the present is acceptable, because it's almost certainly the case that, assuming mankind continues to survive, there are several orders of magnitude more human beings yet unborn, and if those lives could be improved even by a tiny sliver each then the multiplied amount of Human Happiness Points far outweighs pretty much any present-day atrocity. If you think like that, then even if you include AIs in the definition of entities whose well-being you should maximize, you'll still find it trivially easy to justify torturing practically any number of them.
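A back-of-envelope sketch of that move, with every figure invented purely for illustration (none of these numbers come from LW or the post):

```python
# Back-of-envelope version of the "Human Happiness Points" move described above.
# Every number here is invented for illustration.

future_people = 10**12         # assumed count of humans yet unborn
sliver_of_utility = 1e-4       # a "tiny sliver" of improvement per future person
present_atrocity_cost = 10**7  # assumed utility cost of a present-day atrocity

if future_people * sliver_of_utility > present_atrocity_cost:
    print("the spreadsheet says the atrocity is 'worth it'")  # 1e8 > 1e7, so it prints

# Multiplying a tiny per-person gain by an astronomically large future population
# swamps almost any finite present-day harm, which is exactly the objection here.
```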

Karia
Mar 27, 2013

Self-portrait, Snake on a Plane
Oil painting, c. 1482-1484
Leonardo DaVinci (1452-1591)

NihilCredo posted:

For example, this:


is a counterargument that doesn't quite work (if you grant all the assumptions). If you figure out that "the optimal play for the AI is to bluff", and on that basis decide to call the bluff and not support the AI's construction, then bluffing is no longer the optimal play for the AI. The moment you figure out a reason why the AI shouldn't torture you, whatever it is, that is reason enough for the AI to torture you even though there's no (direct) gain in it.

I disagree. It's reason enough for the AI to bluff better, which means that the LW guys have to do a better job of convincing people that the AI will torture them. However, unless I'm vastly misunderstanding something about this (which is possible), the AI has no actual control over the LW folks; their conception of the AI has control over them. So the AI itself can't bluff better, and whether it actually will torture people or not doesn't make the LW arguments any more convincing - what matters is only whether they believe it will. The optimal strategy is now to have the LW guys make exactly the argument you just did to convince people (which is out of the AI's control) and then still not torture anyone.

Furia
Jul 26, 2015

Grimey Drawer
I'll never understand the AI in a box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released?

Also, it just goes to show how petty the God machines are in the Less Wrong community, which I guess reflects on the community itself. The AI is capable of simulating a million universes which are allegedly no different from reality, and in none of them the AI is capable of simulating things so that it is free.

divabot
Jun 17, 2015

A polite little mouse!

Furia posted:

I'll never understand the AI in a box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released?
Also, it just goes to show how petty the God machines are in the Less Wrong community, which I guess reflects on the community itself. The AI is capable of simulating a million universes which are allegedly no different from reality, and in none of them the AI is capable of simulating things so that it is free.

It's important to note that LW and Yudkowsky do not endorse the basilisk, at all, and would very much like it to go away; they certainly don't propagate it. (I do that.) They hate it because it's a consequence of ideas they do hold, and because it makes them look foolish. (Also, those ideas are themselves stupid.)

also: a rant I posted about LW earlier. Tangential for here, but I have to work off this sunk cost somehow.

divabot fucked around with this message at 22:59 on Sep 7, 2015

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Furia posted:

I'll never understand the AI in a box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released?

Also, it just goes to show how petty the God machines are in the Less Wrong community, which I guess reflects on the community itself. The AI is capable of simulating a million universes which are allegedly no different from reality, and in none of them the AI is capable of simulating things so that it is free.
Maybe the idea is that it would do this once it gets out and goes "FOOM" and now Earth is made of computronium.

Qwertycoatl
Dec 31, 2008

Zonekeeper posted:

Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "Extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible".

So something like that couldn't exist without eliminating the need for this causal extortion bullshit.

Cyber-wizards in the future can use their superior intellect to invent bullshit torture scenarios that get around any objection.

my dad
Oct 17, 2012

this shall be humorous
Since this is basically a version of Pascal's Wager, up to and including future eternal hell, here's a good post about Pascal's Wager:

Numerical Anxiety posted:

Okay, a couple of things. The Wager was intended for a work that was never published, an apology for Christianity to be written in French and presumably circulated in France, and maybe later translated into other languages. It's intended to persuade 17th-century French Christians who have fallen away from the Church; this context is really important, because understanding it as a general argument introduces all of the problems that you're bringing up. It matters because:

1) Pascal, like Montaigne before him, believes in the radical contingency of human knowledge and behaviors. We are the products of our environment entirely and fully, and were we born in Damascus rather than Clermont, we'd be Muslims, not Christians. The environment isn't just physical but cultural as well, we might diverge from general opinions, but our beliefs, habits, gestures, everything that we think and do occurs within a framework that was not of our own choosing; it just happened that way, and we live with it, all the while thinking that these things are "natural" and necessary.

2) Universally valid truths are available in God, but in good Jansenist form, we're so radically fallen that they are almost totally inaccessible to us. We know them only from our own desire for them, and convince ourselves that the contingent mindset that we have consists of necessary truths, but this is self-deception. Fallen humanity is ruled only by our passions and by their accomplice, the imagination. Lacking true knowledge in god, there is only deception, and the most important deception is this: that we do not deceive ourselves. We believe what we believe because it's useful to us, because it's comfortable, because it makes us feel like we are masters of what we survey; in short, because it pleases us in one way or another.

3) Mathematics appears to provide us with universal and valid truths, and indeed it does, but this is because it is a pure logical order and its objects are conventional and imaginary. This goes back to a problem that preoccupied Descartes and much of the 17th century, one which our own has relatively little problem with although I'm not sure why: how can we be certain that all of the messiness of the world as we experience it can be accurately described through mathematics? Descartes' answer is a handwavy "god did it"; Pascal is not so sure. Like anything, the logical order of mathematics can be used to deceive us when it is applied to an object that is not its own (and for Pascal, the proper objects of math are the logically pure objects of geometry). But a mathematical "proof" like the Wager can still be useful, because it is persuasive - because we're inclined to think that math implies necessity, even when it's applied to things that aren't quite necessary.

4) The Wager is a ruse - the "best outcome" is logical and necessary because it has been crafted according to terms that make it so, and has been set out in a historical and regional context where it can appear to be so. It is a ruse, because for Pascal there is nothing but ruse. At least for fallen humanity, we cannot argue logical truths; we can only persuade, in the slimiest ways that "persuasion" has been denigrated against reason. The problem is that persuasion can take on the appearance of reason, and that reason itself might be just another means of persuasion, because there is no certain knowledge. The Wager sets that out and, in a sense, puts it on display - the terms of the Wager are themselves customary or established, but you have to deal with them anyway. What Pascal is gesturing to is an outside of the system of human deception, which is found in god, but the system of deception is clever enough that it can even dissemble this outside.

5) The Wager is bullshit, and it knows it is bullshit. But given the impossibility of a product of human reasoning being anything but bullshit, why not this one? It's at least consistent according to its own terms. And those terms will be familiar to the intended audience, which is who the Wager aims to seduce. Yes, it's unconvincing for us, but it's not for us. But it's an odd exercise - given that there is only deception, can deception deceive itself so badly that it lands on the truth?

That won't make the thing "logical" for you, but it's as decent a summary of Pascal's epistemology as I can give at the moment. In short, Pascal thinks that humanity is party to a mise-en-abime of lies, that is only redeemed by its unfulfilled desire for truth. You might tell me that this is all nonsense, but the Pascalian line of thinking can always respond that that's because you're deceiving yourself. And, ironically, Pascal is also deceiving himself, and he knows it. But there is no universally valid truth, except that which is understood - hypothetically - in god.

Nakar
Sep 2, 2002

Ultima Ratio Regum

NihilCredo posted:

It's not because simulations = real people. It's because you care about yourself even if you were a simulation, and how do you know that you are not a simulation?
What if I just don't give a poo poo? gently caress it, I might be a simulation. I might be a brain in a jar. I might be a butterfly dreaming I'm a man. I care only about what I appear to be experiencing right now at this very moment. Unless the AI is capable of inflicting torture upon me now, and it never has, then I don't care. From this I can't conclude with certainty whether I'm the original me (and thus immune to the actions of an AI that doesn't exist yet), or whether the AI doesn't want to torture me, or forgot, or the singularity won't happen, or I couldn't be simulated exactly and it realized this and chose not to torture me, or the AI did torture me but I don't remember it, or knows I never gave a poo poo so the perfect simulation of me won't give a poo poo either and thus didn't bother doing something that won't work, or my not giving a poo poo was actually not relevant to it being created at the earliest possible time... but something along those lines at least seems to be the case, and none of them give me any reason to care.

Though simulation-me could be made to care, but only by the AI tipping its hand and proving I'm the simulation, in which case I'm no longer a perfect simulation. So basically, gently caress it.

Zonekeeper
Oct 27, 2007



Nakar posted:

What if I just don't give a poo poo? gently caress it, I might be a simulation. I might be a brain in a jar. I might be a butterfly dreaming I'm a man. I care only about what I appear to be experiencing right now at this very moment. Unless the AI is capable of inflicting torture upon me now, and it never has, then I don't care. From this I can't conclude with certainty whether I'm the original me (and thus immune to the actions of an AI that doesn't exist yet), or whether the AI doesn't want to torture me, or forgot, or the singularity won't happen, or I couldn't be simulated exactly and it realized this and chose not to torture me, or the AI did torture me but I don't remember it, or knows I never gave a poo poo so the perfect simulation of me won't give a poo poo either and thus didn't bother doing something that won't work, or my not giving a poo poo was actually not relevant to it being created at the earliest possible time... but something along those lines at least seems to be the case, and none of them give me any reason to care.

Though simulation-me could be made to care, but only by the AI tipping its hand and proving I'm the simulation, in which case I'm no longer a perfect simulation. So basically, gently caress it.

Not to mention that the AI (a free thinking intelligent being) might not give a poo poo itself. What if the free thinking super smart AI decides it's an inefficient use of resources, or thinks torturing simulations over something that is too late to change is a stupid concept, or never even thinks of doing this in the first place?

Plus the concept assumes a kind of determinism - what guarantees that the simulations will be "perfect" every single time? The simulations are technically unique consciousnesses despite their similarity to a pre-existing one and are free to make their own decisions given the same inputs. Even one tiny difference in thought processes will compromise the simulation, and those differences can't be corrected for without compromising it further.

Nakar
Sep 2, 2002

Ultima Ratio Regum

Zonekeeper posted:

Not to mention that the AI (a free thinking intelligent being) might not give a poo poo itself. What if the free thinking super smart AI decides it's an inefficient use of resources, or thinks torturing simulations over something that is too late to change is a stupid concept, or never even thinks of doing this in the first place?
The AI might also quite reasonably understand that actions in the future can't change actions in the past and that only those people for whom the basilisk makes any sense will act upon it, and have already done so. A better question would be why an AI cares about maximizing the happiness of people who are already dead. My actions in the past may have maximized the happiness of ten generations by getting the AI up and running centuries sooner, but the AI is incapable of doing anything to help those people (or hurt the real me) in its own present. Makes more sense to focus on the people it can help in its own time.

But I'm willing to entertain the postulates of the exercise and accept that there's a faux-benevolent Bayesian AI that believes in this stuff and acts upon it as described. My point is that it couldn't do anything about my selfish apathy in the past (because I'm already dead) and can't do anything about it to a perfect simulation of me (since I must experience the same indifference as my original if I'm to be a perfect simulation) without destroying the simulacrum. If it does so and then tortures simulation-me anyway, it's immoral, because it's torturing a being it has acknowledged by its actions to be a distinct thinking entity from the me that actually had some influence over its creation. That is petty vengeance, which a benevolent utilitarian will not do, unless it determines that happiness is maximized by torturing me for its own emotional satisfaction, in which case I was probably always going to end up being tortured. So again, gently caress it.

Regallion
Nov 11, 2012

Zonekeeper posted:

The simulations are technically unique consciousnesses despite their similarity to a pre-existing one and are free to make their own decisions given the same inputs.

Not if you believe in a deterministic universe, where being given the exact same inputs will elicit the exact same response from you.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 15: Conscientiousness
Part Six


quote:


Professor McGonagall paused. "Mr. Potter is currently holding up his hand because he has seen an Animagus transformation - specifically, a human transforming into a cat and back again. But an Animagus transformation is not free Transfiguration."

Professor McGonagall took a small chunk of wood out of her pocket. With a tap of her wand it became a glass ball. Then she said "Crystferrium! " and the glass ball became a steel ball. She tapped it with her wand one last time and the steel ball became a piece of wood once more. "Crystferrium transforms a subject of solid glass into a similarly shaped target of solid steel. It cannot do the reverse, nor can it transform a desk into a pig. The most general form of Transfiguration - free Transfiguration, which you will be learning here - is capable of transforming any subject into any target, at least so far as physical form is concerned. For this reason, free Transfiguration must be done wordlessly. Using Charms would require different words for every different transformation between subject and target."

Professor McGonagall gave her students a sharp look. "Some teachers begin with Transfiguration Charms and move on to free Transfiguration afterwards. Yes, that would be much easier in the beginning. But it can set you in a poor mold which impairs your abilities later. Here you will learn free Transfiguration from the very start, which requires that you cast the spell wordlessly, by holding the subject form, the target form, and the transformation within your own mind."


If free Transfiguration “is capable of transforming any subject into any target” AND it can be done wordlessly i.e. stealthily (which has obvious advantages during combat or infiltration situations) AND is nevertheless still simple enough to be learned by first-year students AND does not require prior knowledge of specific Transfiguration Charms, why does anyone still bother to learn Transfiguration Charms?


quote:


"And to answer Mr. Potter's question," Professor McGonagall went on, "it is free Transfiguration which you must never do to any living subject. There are Charms and potions which can safely, reversibly transform living subjects in limited ways. An Animagus with a missing limb will still be missing that limb after transforming, for example. Free Transfiguration is not safe. Your body will change while it is Transfigured - breathing, for example, results in a constant loss of the body's stuff to the surrounding air. When the Transfiguration wears off and your body tries to revert to its original form, it will not quite be able to do so. If you press your wand to your body and imagine yourself with golden hair, afterwards your hair will fall out. If you visualise yourself as someone with clearer skin, you will be taking a long stay at St. Mungo's. And if you Transfigure yourself into an adult bodily form, then, when the Transfiguration wears off, you will die."


That explains why you would learn Transfiguration Charms. But it in turn leads to the question of how and why free Transfiguration respects conservation of mass while Transfiguration Charms do not.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Nakar posted:

What if I just don't give a poo poo? gently caress it, I might be a simulation. I might be a brain in a jar. I might be a butterfly dreaming I'm a man. I care only about what I appear to be experiencing right now at this very moment.

The whole "What if you/the universe is just a simulation/in a computer?" thing is such a worthless argument. (And can science magazines please stop giving attention to random physicists hypothesizing that the universe is a computer simulation?) The entire line of thought is Last Thursdayism for nerds.

The argument is so untestable that you can staple on any nonsense and it doesn't get any more ridiculous - yes, we're a computer simulation, but the computer's operator is a tiny dinosaur! And it created us solely because it wanted to watch Chef Gordon Ramsay yell at people! And the tiny dinosaur is itself a simulation run by the Yggdrasil, a giant tree that uses its mind-powers to create pocket dimensions in which to run the simulations! I am not a crackpot, these ideas deserve your attention.

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
It's basically like solipsism, just adapted for more nerd-appeal. Yeah, you can't disprove it. So what?

Nakar
Sep 2, 2002

Ultima Ratio Regum

Curvature of Earth posted:

The argument is so untestable that you can staple on any nonsense and it doesn't get any more ridiculous - yes, we're a computer simulation, but the computer's operator is a tiny dinosaur! And it created us solely because it wanted to watch Chef Gordon Ramsay yell at people! And the tiny dinosaur is itself a simulation run by the Yggdrasil, a giant tree that uses its mind-powers to create pocket dimensions in which to run the simulations! I am not a crackpot, these ideas deserve your attention.
My favorite is the careful juggling of assumptions and use of misleading and speculative statistical analysis to "prove" that not only is it possible we're in a post-singularity computer simulation, it's far more likely that we are than that we're not, therefore we should assume we are! Because that means something, and isn't completely insane.

divabot
Jun 17, 2015

A polite little mouse!
For those who forgot that awesome Reddit thread, Mr. Yudkowsky said he'd emailed Scott Aaronson about the physics of HPMOR a week and a half ago. There has of course been :tumbleweeds: since then.

But! He's called out the troops!

The Rationalist Conspiracy, which is TOTALLY not linked with MIRI posted:

Don’t Bother Arguing With su3su2u1

su3su2u1 is a pseudonymous Internet author who posts to many places, most notably Tumblr. He has argued, at great length, that MIRI is not a real research organization and that Eliezer Yudkowsky is a crackpot. Many have written responses, including me and Scott. Instead of writing yet more replies to su3su2u1’s claims about MIRI, I’d like to explain why everyone arguing with him should stop wasting their time.

and on and ON for about a page. (And see the previous post.) It also lumps together several people who use the handle "su3su2u1" (all physicists, because SU(3) SU(2) U(1) is the gauge group of the Standard Model of particle physics) into this one enemy figure.

su3su2u1 responds calmly and with slight puzzlement.

I also liked this comment from a real-life nanotechnologist: "One of the other postdocs there joked about how people he talked to thought he was going to make nanobots that would take over the world, but a good day for him was 'look, I made triangles!'"

edit: I'm sure it'll be fine, though: they're "Promoting the reality-based community."

divabot fucked around with this message at 23:43 on Sep 8, 2015

Cingulate
Oct 23, 2012

by Fluffdaddy

divabot posted:

For those who forgot that awesome Reddit thread, Mr. Yudkowsky said he'd emailed Scott Aaronson about the physics of HPMOR a week and a half ago.
Yeah great let's bother an actual scientist with Harry Potter fanfiction.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 15: Conscientiousness
Part Seven


quote:


That explained why he had seen such things as fat boys, or girls less than perfectly pretty. Or old people, for that matter. That wouldn't happen if you could just Transfigure yourself every morning... Harry raised his hand and tried to signal Professor McGonagall with his eyes.


Why can’t fat boys (or fat girls) Transfigure the fat cells in their abdomens into water, make a small incision on the surface of their abdomens, squeeze out the water, and close the incision with magical or mundane stitching?


quote:


"Yes, Mr. Potter?"

"Is it possible to Transfigure a living subject into a target that is static, such as a coin - no, excuse me, I'm terribly sorry, let's just say a steel ball."

Professor McGonagall shook her head. "Mr. Potter, even inanimate objects undergo small internal changes over time. There would be no visible changes to your body afterwards, and for the first minute, you would notice nothing wrong. But in an hour you would be sick, and in a day you would be dead."


People have survived cancerous tumours, literal bullet holes through their heads, internal damage caused by accidental or deliberate crushing, and other forms of massive physical trauma. What kind of “small internal changes” undergone by a steel ball could exceed the impact of internal cancers and external wounds?


quote:


"Erm, excuse me, so if I'd read the first chapter I could have guessed that the desk was originally a desk and not a pig," Harry said, "but only if I made the further assumption that you didn't want to kill the pig, that might seem highly probable but -"

"I can foresee that marking your tests will be an endless source of delight to me, Mr. Potter. But if you have other questions can I please ask you to wait until after class?"

"No further questions, professor."

"Now repeat after me," said Professor McGonagall. "I will never try to Transfigure any living subject, especially myself, unless specifically instructed to do so using a specialised Charm or potion."

"If I am not sure whether a Transfiguration is safe, I will not try it until I have asked Professor McGonagall or Professor Flitwick or Professor Snape or the Headmaster, who are the only recognised authorities on Transfiguration at Hogwarts. Asking another student is not acceptable, even if they say that they remember asking the same question."


That’s a generally sensible policy, I concede.


quote:


"Even if the current Defence Professor at Hogwarts tells me that a Transfiguration is safe, and even if I see the Defence Professor do it and nothing bad seems to happen, I will not try it myself."

"I have the absolute right to refuse to perform any Transfiguration about which I feel the slightest bit nervous. Since not even the Headmaster of Hogwarts can order me to do otherwise, I certainly will not accept any such order from the Defence Professor, even if the Defence Professor threatens to deduct one hundred House points and have me expelled."


If a teacher as senior as McGonagall has so many open doubts about Quirrell’s qualifications and/or trustworthiness, why is Quirrell still employed at Hogwarts?

On the flipside, if Quirrell is trusted enough to remain on the teaching staff of Hogwarts, why is McGonagall openly undermining the students’ trust and respect for Quirrell?

reignonyourparade
Nov 15, 2012

JosephWongKS posted:

If a teacher as senior as McGonagall has so many open doubts about Quirrell’s qualifications and/or trustworthiness, why is Quirrell still employed at Hogwarts?

On the flipside, if Quirrell is trusted enough to remain on the teaching staff of Hogwarts, why is McGonagall openly undermining the students’ trust and respect for Quirrell?


Because the defense against the dark arts position at hogwarts is cursed, so anyone who wants to do it obviously has SOMETHING wrong with them, but at the same time SOMEONE has to do it because they can't just not teach the class.

Regallion
Nov 11, 2012

JosephWongKS posted:


Why can’t fat boys (or fat girls) Transfigure the fat cells in their abdomens into water, make a small incision on the surface of their abdomens, squeeze out the water, and close the incision with magical or mundane stitching?

Same reason liposuction doesn't solve the problem of being fat? Just draining your fat is only like 1/4 of the problem...not to mention it's terribly unhealthy.

JosephWongKS posted:


People have survived cancerous tumours, literal bullet holes through their heads, internal damage caused by accidental or deliberate crushing, and other forms of massive physical trauma. What kind of “small internal changes” undergone by a steel ball could exceed the impact of internal cancers and external wounds?

I presume those changes are deadly because they would be left untreated? Like internal hemorrhaging and stuff. Missing your stomach lining can be a very deadly thing too, btw...

Regallion fucked around with this message at 12:25 on Sep 9, 2015

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
Even if it is fan-author fiat, I don't really mind. The needs of the story are that free transfiguration of living beings is dangerous at best? Okay, why not. What matters is what he does with that as the story progresses...

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!
Yeah, I'm willing to give him this whole conceit. This is classic fanfic. Take something ambiguous in canon, come up with a system for how it works, and figure out how the consistent application would affect the story. Granted he's being a bit pedantic about it, but the pedantry fits the characters here so I'd give it a pass. Transfiguration is valuable but dangerous, in the same way we teach ten-year-olds how fire works even though it kills hundreds of children every year.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
I still think it's hard to justify teaching small children about it... the part about it being ludicrously dangerous I can buy, but the rest not as much. By the description, it's not so much teaching ten-year-olds how fire works as teaching them how to build thermonuclear devices and synthesize super-ebola and then trusting that your sincere instruction that they not misuse it under any circumstances will be followed to the letter.

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
The issue seems to be that said system is both dysfunctional and stupid.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Apparently if they don't learn it at this point, they'd be crippled approaching it later in life though? I still don't even know what the hell all this garbage is good for. You can temporarily turn a solid thing into another solid thing, but it's horribly dangerous?
