Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Yeah, exactly. Infinite regression seems to be pretty unavoidable to me in any situation where the results of the prediction have any kind of influence on the subject of the prediction. There just isn't such a thing as the specific outcome of only one single decision.


SubG
Aug 19, 2004

It's a hard world for little things.

Spoilers Below posted:

It is if you're going to predict how you'd predict the prediction of a prediction that's predicted by you predicting the prediction of a prediction of a... ad infinitum.
Nah. I mean it's possible that this is some sort of special edge-case problem, but in general it seems that people's behaviour in typical game theory tests is something that's amenable to things like factor analysis. People just tend to overestimate the complexity of their deliberations.

Not quite the same thing, but if you look at for example OkCupid's analyses of their user data there's all kinds of simple factors that predict what we would imagine to be elaborately complicated situations. Like their three questions (`Do you like horror movies?', `Have you ever traveled around another country alone?', `Wouldn't it be fun to chuck it all and go live on a sailboat?') that are good predictors of the long-term compatibility of couples. That's not to say that they're perfect predictors. Or that I can come up with a similar set of criteria for the Newcomb box off the top of my head. My point's just that even if we imagine that a particular decision is based on an almost impossibly complicated set of contingencies, conditionals, and subtle personal inflections (like whether or not we want a long-term relationship with a particular person), that doesn't mean that the outcome can't be modeled by simpler means.

Put in slightly different terms: our internal models for our own decisions appear to contain an awful lot of redundancies.
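
A toy numerical sketch of that redundancy point, with made-up features standing in for the coarse survey answers (this is not OkCupid's actual model, just an illustration): if the many hidden factors behind a decision are mostly correlated with a few coarse traits, a dumb linear fit on those traits alone already predicts the choice pretty well.

code:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Three coarse, observable traits (stand-ins for simple survey answers).
coarse = rng.normal(size=(n, 3))
# Fifty hidden "deliberation factors", largely redundant with the coarse traits.
hidden = coarse @ rng.normal(size=(3, 50)) + 0.5 * rng.normal(size=(n, 50))

# The "true" A/B decision uses all fifty hidden factors.
w_true = rng.normal(size=50)
choice = (hidden @ w_true + 0.1 * rng.normal(size=n)) > 0

# Predict the decision from the three coarse traits alone via least squares.
X = np.column_stack([np.ones(n), coarse])
beta, *_ = np.linalg.lstsq(X, choice.astype(float), rcond=None)
pred = (X @ beta) > 0.5

print(f"accuracy from 3 coarse features: {np.mean(pred == choice):.2%}")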

Pf. Hikikomoriarty
Feb 15, 2003

RO YNSHO


Slippery Tilde
http://lesswrong.com/lw/q9/the_failures_of_eld_science/

quote:

Jeffreyssai repeated. "So, Brennan, how long do you think it should take to solve a major scientific problem, if you are not wasting any time?"

Now there was a trapped question if Brennan had ever heard one. There was no way to guess what time period Jeffreyssai had in mind—what the sensei would consider too long, or too short. Which meant that the only way out was to just try for the genuine truth; this would offer him the defense of honesty, little defense though it was. "One year, sensei?"

"Do you think it could be done in one month, Brennan? In a case, let us stipulate, where in principle you already have enough experimental evidence to determine an answer, but not so much experimental evidence that you can afford to make errors in interpreting it."

Again, no way to guess which answer Jeffreyssai might want... "One month seems like an unrealistically short time to me, sensei."

"A short time?" Jeffreyssai said incredulously. "How many minutes in thirty days? Hiriwa?"

So is Yudkowsky suggesting that all major theoretical scientific breakthroughs should take less than a month because if so holy lol.

Toph Bei Fong
Feb 29, 2008



SubG posted:

Nah. I mean it's possible that this is some sort of special edge-case problem, but in general it seems that people's behaviour in typical game theory tests is something that's amenable to things like factor analysis. People just tend to overestimate the complexity of their deliberations.

Not quite the same thing, but if you look at for example OkCupid's analyses of their user data there's all kinds of simple factors that predict what we would imagine to be elaborately complicated situations. Like their three questions (`Do you like horror movies?', `Have you ever traveled around another country alone?', `Wouldn't it be fun to chuck it all and go live on a sailboat?') that are good predictors of the long-term compatibility of couples. That's not to say that they're perfect predictors. Or that I can come up with a similar set of criteria for the Newcomb box off the top of my head. My point's just that even if we imagine that a particular decision is based on an almost impossibly complicated set of contingencies, conditionals, and subtle personal inflections (like whether or not we want a long-term relationship with a particular person), that doesn't mean that the outcome can't be modeled by simpler means.

Put in slightly different terms: our internal models for our own decisions appear to contain an awful lot of redundancies.

Sure, sure. I agree with you completely there. You are totally not wrong.

The problem is that Yudkowsky is proposing a supercomputer that can model human behavior with 100% accuracy, including all future activity, based on past behavior, such that it would always know if it was dealing with a "moral" person before they made the decision to take the money from the box, even if they were to suddenly get blazed right afterwards and act completely out of character and do something that they would ordinarily never do (because it would have somehow known beforehand that you were going to toke up after you got the money, and not put the extra money in the box). It would also somehow account for your less moral friend insisting on tagging along that day even though neither of you had planned on it, and who would take money from both boxes, even though you yourself would only take from one.

Which is pretty different from what you're talking about.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
How do you think Yudkowsky feels about the possibility of, i.e., a second supercomputer that's even smarter than the first supercomputer and knows exactly what the first supercomputer predicts it will do, but always does the opposite?

So, if the first supercomputer thinks it will take both boxes, it only takes one, and if the first supercomputer thinks it will take one box, it will take both. And suppose the first supercomputer knows that it knows as much as it knows. How does the first supercomputer know what to say? Clearly in all cases it can't absolutely determine what the second one will do.

(This is pretty similar to the halting problem.)
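
A minimal sketch of that diagonal argument (purely illustrative; "omega" below is just a placeholder for whatever fixed prediction rule you like): any predictor handed an agent that consults it and does the opposite is wrong by construction.

code:

def make_contrarian(predictor):
    """Agent that asks the predictor about itself, then defies the answer."""
    def contrarian():
        # "one" = take only box B, "both" = take both boxes
        return "both" if predictor(contrarian) == "one" else "one"
    return contrarian

def omega(agent):
    # Stand-in for the first supercomputer; swap in any prediction rule here.
    return "one"

agent = make_contrarian(omega)
print("omega predicts:", omega(agent))  # "one"
print("agent chooses: ", agent())       # "both" -- the prediction is falsified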

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

Mort posted:

http://lesswrong.com/lw/q9/the_failures_of_eld_science/


So is Yudkowsky suggesting that all major theoretical scientific breakthroughs should take less than a month because if so holy lol.

He's suggesting that the scientific method is a bad method of inquiry because it is too slow and the vast majority of the work in solving a problem comes from deciding on a hypothesis from the infinity that is "idea space".

This is a huge theme throughout everything he writes, because he is a quintessential ideas guy who wants to believe in his own importance.

SubG
Aug 19, 2004

It's a hard world for little things.

Spoilers Below posted:

The problem is that Yudkowsky is proposing a supercomputer that can model human behavior with 100% accuracy, including all future activity, based on past behavior, such that it would always know if it was dealing with a "moral" person before they made the decision to take the money from the box, even if they were to suddenly get blazed right afterwards and act completely out of character and do something that they would ordinarily never do (because it would have somehow known beforehand that you were going to toke up after you got the money, and not put the extra money in the box). It would also somehow account for your less moral friend insisting on tagging along that day even though neither of you had planned on it, and who would take money from both boxes, even though you yourself would only take from one.

Which is pretty different from what you're talking about.
That's also different from what Cardiovorax was saying, which is what I was responding to. That is, his claim that you have to model some sort of infinitely recursive thing in order to predict a person's A/B decision. I'm confident that this is not true because it does not appear to be necessary for the person making the decision to do some sort of infinite recursion gymnastics in order to make the decision in the first place. Because people don't, as a general matter, start going `error, error' and smoking out the ears when you present them with such a problem, like some kind of robot out of Star Trek.

And even if we believe that all of a person's moral complexities influence the outcome of a Newcomb box choice, that doesn't mean that we have to model all of those complexities. Unless none of it is redundant...but it almost certainly is. Like if you wanted to predict how a person will decide some theoretically complicated moral, social, or political question you're probably most of the way there if you ask them their zip code, education, and income.

Toph Bei Fong
Feb 29, 2008



SubG posted:

That's also different from what Cardiovorax was saying, which is what I was responding to. That is, his claim that you have to model some sort of infinitely recursive thing in order to predict a person's A/B decision. I'm confident that this is not true because it does not appear to be necessary for the person making the decision to do some sort of infinite recursion gymnastics in order to make the decision in the first place. Because people don't, as a general matter, start going `error, error' and smoking out the ears when you present them with such a problem, like some kind of robot out of Star Trek.

And even if we believe that all of a person's moral complexities influence the outcome of a Newcomb box choice, that doesn't mean that we have to model all of those complexities. Unless none of it is redundant...but it almost certainly is. Like if you wanted to predict how a person will decide some theoretically complicated moral, social, or political question you're probably most of the way there if you ask them their zip code, education, and income.

Sure.

But none of this can cause the money to magically wink out of existence, per the rules of the game. Once the problem is set up, without the use of trap doors, legerdemain, teleporters, some concealed thermite and a video camera, etc., there is no way for the AI to retroactively change its mind if you change yours. By definition it cannot be 100% accurate and cannot account for all the various circumstances that could occur which might cause the other box to be taken without the agent actively choosing to take it.

Comedy forum answer: Always choose B, and then break the computer if it doesn't contain $1,000,000 because it was wrong and bad at predicting, preferably using the empty box as a warning to any future iterations of the computer. The computer will not commit suicide because it knows the third law of robotics, so box B should always be full. Then take A also, because hey, free extra money. Or wear x-ray goggles and inspect box B beforehand, because seriously, we've got a computer that can model behavior this accurately, and we can't get some x-ray specs?

Toph Bei Fong fucked around with this message at 08:05 on Oct 29, 2014

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
You seem to have misunderstood me there. I didn't say that a person can't make decisions based on a prediction, I said that you can't make an accurate prediction when the point of the prediction is to influence the outcome of the predicted event. It's computationally impossible. Event E is predicted by prediction P. The actor influences event E and it becomes event E', which makes prediction P invalid and requires a new prediction P'. The actor influences event E' and it becomes event E'', which makes... etc. etc. That's a textbook endless loop. It just can't be resolved.
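
A toy version of that loop (illustrative only, not anyone's formal model): treat the prediction as an input to the thing being predicted and look for a fixed point. Whether the re-prediction ever settles depends entirely on how the subject reacts to being predicted.

code:

def iterate(react, start="one", max_steps=10):
    prediction = start
    for step in range(max_steps):
        outcome = react(prediction)   # what the subject does, given the prediction
        if outcome == prediction:     # fixed point: the prediction stays valid
            return f"stable after {step + 1} step(s): {outcome}"
        prediction = outcome          # E became E', so re-predict with the new data
    return "never stabilises (the textbook endless loop)"

contrarian = lambda p: "both" if p == "one" else "one"  # always defies the prediction
oblivious = lambda p: "one"                             # ignores the prediction entirely

print(iterate(contrarian))  # never stabilises (the textbook endless loop)
print(iterate(oblivious))   # stable after 1 step(s): one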

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Spoilers Below posted:

By definition it cannot be 100% accurate and cannot account for all the various circumstances that could occur which might cause the other box to be taken without the agent actively choosing to take it.

Actually, by definition the AI is 100% accurate in that thought experiment. That's the whole point.

It just so happens that it's impossible to make such an AI in real life.

Toph Bei Fong
Feb 29, 2008



RPATDO_LAMD posted:

Actually, by definition the AI is 100% accurate in that thought experiment. That's the whole point.

It just so happens that it's impossible to make such an AI in real life.

Ah yes, real life. Bane of thought experiments, solver of paradoxes, frustrater of "idea guys" :v:

Evrart Claire
Jan 11, 2008

Krotera posted:

How do you think Yudkowsky feels about the possibility of, i.e., a second supercomputer that's even smarter than the first supercomputer and knows exactly what the first supercomputer predicts it will do, but always does the opposite?

So, if the first supercomputer thinks it will take both boxes, it only takes one, and if the first supercomputer thinks it will take one box, it will take both. And suppose the first supercomputer knows that it knows as much as it knows. How does the first supercomputer know what to say? Clearly in all cases it can't absolutely determine what the second one will do.

(This is pretty similar to the halting problem.)

This sort of idea, combined with Yudkowsky's belief that no event can have a probability of 0, kind of wrecks the premise of that TDT problem.

The problem is based on the decision processes of the tester and the subject. If the subject is also capable of these perfect predictions and sets up his decision-making so that the situation turns out as:
Tester: Put $100 in A if Subject will open only A
Subject: Take only A only if Tester put nothing in A

This ends up with a paradox where it's impossible for both predictions to be correct.
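
Spelling that out as a brute-force check (reading both rules as strict iff, which is the paradoxical case): no assignment of outcomes satisfies both, so at least one of the two perfect predictors has to be wrong.

code:

from itertools import product

consistent = [
    (filled, takes_only_a)
    for filled, takes_only_a in product([True, False], repeat=2)
    if filled == takes_only_a           # Tester: fill A iff Subject takes only A
    and takes_only_a == (not filled)    # Subject: take only A iff A was left empty
]
print(consistent)  # [] -- no outcome is consistent with both rules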

Luigi's Discount Porn Bin
Jul 19, 2000


Oven Wrangler

Mort posted:

http://lesswrong.com/lw/q9/the_failures_of_eld_science/


So is Yudkowsky suggesting that all major theoretical scientific breakthroughs should take less than a month because if so holy lol.
Naturally, the time needed for a scientific breakthrough pales in comparison to the time needed to finish writing a smug Harry Potter fanfic (4 and a half years and counting).

Chamale
Jul 11, 2010

I'm helping!



The idea of a perfect AI taking control of the world by simply communicating with people reminds me of some concepts in tool-assisted speedruns. When a player is limited to only pressing buttons connected to a video game, it seems implausible that you could do much outside of the limitations of the game. But with a careful understanding of the underlying code, it's possible to make a bot that uploads custom code through the controller port by manipulating Mario's actions until it finds a hole in the code. So the idea is that a perfect AI would understand the exact sequence of communication required to convince a person of anything, whatever those inputs are, and exploit that to bend everyone to its will. If you presuppose a perfectly intelligent AI, the idea that people are gullible enough to obey it is not the implausible part. The handwaving all comes earlier, when Yud argues that a perfect AI can arise by accident.

SubG
Aug 19, 2004

It's a hard world for little things.

Cardiovorax posted:

You seem to have misunderstood me there. I didn't say that a person can't make decisions based on a prediction, I said that you can't make an accurate prediction when the point of the prediction is to influence the outcome of the predicted event. It's computationally impossible. Event E is predicted by prediction P. The actor influences event E and it becomes event E', which makes prediction P invalid and requires a new prediction P'. The actor influences event E' and it becomes event E'', which makes... etc. etc. That's a textbook endless loop. It just can't be resolved.
This is only true if both sides are doing beep boop infinite recursion like Star Trek robots about to explode. But humans aren't like that, so the AI doesn't have to dive down a recursive bunny hole, because it just has to predict the human's behaviour, and the human's behaviour isn't derived by an infinite loop of `...but he knows I know he knows...' nonsense. Because it can't be.

It's worth pointing out that this is actually a different proposition than your original statement to which I objected:

Cardiovorax posted:

Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself.
This might be true for a general model predicting arbitrary decisions, but not for the simpler case of predicting the outcome of an individual decision. Unless the problem in question is pathological for some reason such that resolving it involves literally every piece of information and skill and memory and so on the individual possesses, and the native process used to resolve it is somehow or other provably optimal for it. But it strikes me as unlikely that any problem is that involved, much less a loving Newcomb box choice.

Spoilers Below posted:

By definition it cannot be 100% accurate and cannot account for all the various circumstances that could occur which might cause the other box to be taken without the agent actively choosing to take it.
If you mean that the human could just flip a truly random coin or something sure. But that's not even an interesting quibble in the context of the problem.

And I'm not sure what you mean `by definition' here. There's nothing saying that a person's decision schema can't be as simple as always choosing A if it's Tuesday and B otherwise. I mean I'm not arguing that it's necessarily that simple (and I'm pretty sure it isn't) but the idea isn't incoherent or a contradiction in terms or whatever.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SubG posted:

This is only true if both sides are doing beep boop infinite recursion like Star Trek robots about to explode. But humans aren't like that, so the AI doesn't have to dive down a recursive bunny hole, because it just has to predict the human's behaviour, and the human's behaviour isn't derived by an infinite loop of `...but he knows I know he knows...' nonsense. Because it can't be.
The point is that it isn't actually about anyone's behaviour; the mere fact that the contents of the predicting entity's mind change requires a repeat of the prediction process with the new data set to remain perfectly accurate. I know this doesn't make a lot of intuitive sense, but since this is a formal thought experiment, it doesn't have to. It makes sense in terms of computational theory.

SubG posted:

It's worth pointing out that this is actually a different proposition than your original statement to which I objected:

This might be true for a general model predicting arbitrary decisions, but not for the simpler case of predicting the outcome of an individual decision. Unless the problem in question is pathological for some reason such that resolving it involves literally every piece of information and skill and memory and so on the individual possesses, and the native process used to resolve it is somehow or other provably optimal for it. But it strikes me as unlikely that any problem is that involved, much less a loving Newcomb box choice.
That is precisely how Yudkowsky proposes his AI god will make its predictions. It's also what would be necessary to make genuinely 100% accurate predictions of behaviour, because you need to be able to simulate with perfect every-single-loving-quark fidelity for that, but that doesn't really matter for most practical purposes, which you are of course right about.

Boing
Jul 12, 2005

trapped in custom title factory, send help

Chamale posted:

The idea of a perfect AI taking control of the world by simply communicating with people reminds me of some concepts in tool-assisted speedruns. When a player is limited to only pressing buttons connected to a video game, it seems implausible that you could do much outside of the limitations of the game. But with a careful understanding of the underlying code, it's possible to make a bot that uploads custom code through the controller port by manipulating Mario's actions until it finds a hole in the code. So the idea is that a perfect AI would understand the exact sequence of communication required to convince a person of anything, whatever those inputs are, and exploit that to bend everyone to its will.

But only if such a sequence exists, which it probably doesn't even if you do suppose a perfect omniscient AI larger than the universe that doesn't give a poo poo about chaos theory. The human brain has a bunch of feedback and control systems and works in really weird ways. To suppose that for any given behaviour Y there is always a sequence of inputs X that causes Y is implausible, in the same way that saying for any given physical phenomenon Y there is always a sequence of body movements X that cause Y, so you too can fly at 500mph and punch tornadoes at people if only you had sufficient perfect mastery of your body chi or something.

Toph Bei Fong
Feb 29, 2008



SubG posted:

If you mean that the human could just flip a truly random coin or something sure. But that's not even an interesting quibble in the context of the problem.

And I'm not sure what you mean `by definition' here. There's nothing saying that a person's decision schema can't be as simple as always choosing A if it's Tuesday and B otherwise. I mean I'm not arguing that it's necessarily that simple (and I'm pretty sure it isn't) but the idea isn't incoherent or a contradiction in terms or whatever.

It could be as simple as that, but it could also be infinitely more complex, hence the bringing-a-friend-along examples, which a 100% accurate computer would have to account for to be 100% accurate. Also it would need to account for a sudden shift in personality that day (the person just had a major death in the family and is feeling out of sorts), having a stroke and making a random choice (and thus the computer not putting anything in B) when one otherwise would have ordinarily only taken one box, etc. etc., which quickly spirals into "Holy poo poo, this would need to model a lot more than a simple human mind" territory. Because 100% accurate means Not Ever Wrong, and Impossible in Real Life Because Computers Can't Model That Much Stuff The Calculation Would Take Forever.

The ordinary personality-test, "works with 99% accuracy" type stuff, I agree you could probably refine your way to. It's the 100% accurate "retroactively affects the past" stuff that I quibble with.

I do think it's very interesting that most of the people discussing the paradox think only in terms of modeling one person's mind, though. I find discussing the limits and problems with the question as presented a lot more interesting than the possible solution, personally v:shobon:v

Peel
Dec 3, 2007

You don't need recursive super predictors. You just need to let the AI choose what to put in the box after it's received your decision. Or in a military analogy, let the enemy apportion forces between two targets after their spies tell them your orders about which to attack, but before they are carried out. These are logically identical to an AI that predicts your actions with knowledge of your psychology.

The whole time travel thing is a red herring that makes the problem seem more mysterious and mindblowing than it is. The future can affect the past! No, just people's anticipations of your actions will affect their response to them. It's difficult to think of a more trivial point.

Serious Cephalopod
Jul 1, 2007

This is a Serious post for a Serious thread.

Bloop Bloop Bloop
Pillbug

Peel posted:

You don't need recursive super predictors. You just need to let the AI choose what to put in the box after it's received your decision. Or in a military analogy, let the enemy apportion forces between two targets after their spies tell them your orders about which to attack, but before they are carried out. These are logically identical to an AI that predicts your actions with knowledge of your psychology.

The whole time travel thing is a red herring that makes the problem seem more mysterious and mindblowing than it is. The future can affect the past! No, just people's anticipations of your actions will affect their response to them. It's difficult to think of a more trivial point.

But Yud's entire point with the exercise is to reverse cause and effect because he wants his computer god to be infallible and almighty so he is thought of as a unique thinker.

Every "unique" idea he's had is just him being (in his mind) contrary to the humanities. And all his big thinking I'd to make himself seem important and enlightened to the masses so he can get off on his status as leader. He reminds me if some relevant passages of "why does he do that?" and my own insecurities as a teenager.

Toph Bei Fong
Feb 29, 2008



Peel posted:

You don't need recursive super predictors. You just need to let the AI choose what to put in the box after it's received your decision. Or in a military analogy, let the enemy apportion forces between two targets after their spies tell them your orders about which to attack, but before they are carried out. These are logically identical to an AI that predicts your actions with knowledge of your psychology.

That's not the question as presented, though. The computer makes a prediction (based on a brain scan or whatever), puts the stuff in boxes, and is then powerless to change anything. You then make your choice. If it gets to alter things after you've made your decision, then the experiment is rather different.

But even the military analogy can quite easily grow more complicated than that quickly.

Is there any doubt as to the spies' loyalty? How many spies were sent? What if they bring back conflicting information? Which spies can be trusted, if there is more than one? Have they verified this kind of information before? How accurate was that information the last time? What if this time the spies have been turned, because Internal Affairs has uncovered subversive literature planted into their rooms by agents from our side, to cast doubt on the spies' completely legitimate information? Do you change your orders based on the knowledge that the plans have been leaked (trivial, given the speed of today's communications)? Can you use the homing signal from the little beacon planted inside the plans to execute a drone strike on the enemy central command, rendering their entire attempt to apportion forces null and void?

quote:

The whole time travel thing is a red herring that makes the problem seem more mysterious and mindblowing than it is. The future can affect the past! No, just people's anticipations of your actions will affect their response to them. It's difficult to think of a more trivial point.

This is 100% correct.

Telarra
Oct 9, 2012

Reminder that Yud invented TDT because he was disappointed with the answer the Bayesian approach gives for this thought experiment.

potatocubed
Jul 26, 2012

*rathian noises*
Also reminder that Yudkowsky's original paper ('paper') where he explains TDT ran to over 100 pages.

It has since been rewritten by one of MIRI's research people into 17 pages.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Moddington posted:

Reminder that Yud invented TDT because he was disappointed with the answer the Bayesian approach gives for this thought experiment.
For someone so obsessed with appearing to be competent in the natural sciences he sure as hell doesn't care a lot about the arrow of loving time.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Mort posted:

http://lesswrong.com/lw/q9/the_failures_of_eld_science/


So is Yudkowsky suggesting that all major theoretical scientific breakthroughs should take less than a month because if so holy lol.

No wonder he loves to stick to fiction. In fiction, everyone can have already invented a unified field theory or a god AI or whatevs by following his simple trick (THE SCIENTIFIC METHOD HATES THIS!) and he can spend the entire article beating off about his thinking methods. The hard part (actual discovery) is easily written as 'And then the smart people doing what I say discovered the thing way faster than the stupid people who do it wrong' and he can spend the whole story smugging it up about how right he was.

Political Whores
Feb 13, 2012

Night10194 posted:

No wonder he loves to stick to fiction. In fiction, everyone can have already invented a unified field theory or a god AI or whatevs by following his simple trick (THE SCIENTIFIC METHOD HATES THIS!) and he can spend the entire article beating off about his thinking methods. The hard part (actual discovery) is easily written as 'And then the smart people doing what I say discovered the thing way faster than the stupid people who do it wrong' and he can spend the whole story smugging it up about how right he was.

I love the "suppose you have exactly the amount of exprimental data you need" part. Like experimentation is just some busy work you need to get out of the way of your really important task of making bullshit up "theorizing".

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Political Whores posted:

I love the "suppose you have exactly the amount of exprimental data you need" part. Like experimentation is just some busy work you need to get out of the way of your really important task of making bullshit up "theorizing".

Tell me about it. I failed as a scientist and have no scientific experience beyond undergrad organic chemistry, but holy poo poo, even just doing titration for those labs and materials analysis for extremely simple experiments designed solely to teach young students how to gather experimental evidence and demonstrate basic principles took a lot of time. I've got friends who work in medical research and the amount of time they spend trying to get the machines and robots working and sort through results is enormous. Yud wants to be the wild Hollywood Scientist who gets The Answer in a short time because the plot demands it and gosh he's so brilliant; he's afraid of doing the real work required.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Class Project posted:

"We know from common sense," Yin said, "that if we stepped outside the universe, we would see time laid out all at once, reality like a crystal. But I once encountered a hint that physics is timeless in a deeper sense than that." Yin's eyes were distant, remembering. "Years ago, I found an abandoned city; it had been uninhabited for eras, I think. And behind a door whose locks were broken, carved into one wall: quote .ua sai .ei mi vimcu ty bu le mekso unquote."

Brennan translated: Eureka! Eliminate t from the equations. And written in Lojban, the sacred language of science, which meant the unknown writer had thought it to be true.

This is what Yudkowsky thinks science looks like.

Political Whores
Feb 13, 2012

Hmm yes "common sense" is a phrase you want to see in a scientific treatise.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Hahaha Lojban loving seriously?

Exit Strategy
Dec 10, 2010

by sebmojo

Cardiovorax posted:

Hahaha Lojban loving seriously?

The man thinks that monolithic RSI AGI can be developed entirely accidentally; of course he thinks Lojban is the perfect language.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
The only particularly nice thing about lojban is that it's really easy to write a parser for. If they were really looking for a pure logical language they would use Ithkuil obviously :smug:. I'm sure they would be able to tell that its 3^^^3 case inflections make it objectively superior.

PS I'm a big conlang nerd.

One Swell Foop
Aug 5, 2010

I'm afraid we have no time for codes and manners.
Anyway, wasn't one of the conclusions of chaos theory that sufficiently complex systems are effectively impossible to accurately predict for anything more than the shortest term?

And isn't the current theory that human cognition relies on processes down to the quantum level, making it essentially impossible to perfectly measure in order to create a model?

Either of which would invalidate most of TDT, relying as it does on perfect modeling and prediction, except as a somewhat interesting thought experiment for solipsists.
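
For the chaos half of that, the standard toy example is the logistic map (nothing to do with brains specifically): two starting states that differ by one part in a billion end up completely different after a few dozen steps, which is the sense in which long-term prediction of such systems is hopeless.

code:

def logistic(x, steps, r=4.0):
    # r = 4.0 puts the map in its chaotic regime
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for steps in (10, 30, 50):
    print(steps, abs(logistic(a, steps) - logistic(b, steps)))
# the gap grows from roughly 1e-6 at 10 steps to order 1 by 30 steps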

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

One Swell Foop posted:

And isn't the current theory that human cognition relies on processes down to the quantum level, making it essentially impossible to perfectly measure in order to create a model?
I'm really sceptical about that, personally. The brain is primarily biochemical in its operation, with subatomic particles playing a minor role and only in amounts that are sufficiently large that the behaviour of individual particles does not matter except in a statistical sense. Plus, most interesting interactions between quanta break down at higher temperatures, meaning anything hotter than frozen nitrogen in this case. The "theory" seems to be primarily espoused by people who heard about the observer effect and misunderstood it completely.

Munin
Nov 14, 2004


One Swell Foop posted:

And isn't the current theory that human cognition relies on processes down to the quantum level, making it essentially impossible to perfectly measure in order to create a model?

Even statements like that are pure handwavy speculation.

We currently do not have a solid model of how human cognition arises, and even basic models of how it functions are deeply contested. So without even going into the AI bullshit, the "humans function and can be modelled in X way" stuff Yudkowsky spouts is also built on clouds and fairytales.

[edit] Just re-reading this I can see how it could sound a bit harsh. Sorry One Swell Foop I in no way meant to sound aggressive towards you. Yours is a perfectly reasonable comment.

Munin fucked around with this message at 19:27 on Oct 29, 2014

Telarra
Oct 9, 2012

Lottery of Babylon posted:

.ua sai .ei mi vimcu ty bu le mekso

As a lojban enthusiast, lol at the pointless 'mi' ("I") and the terrible translation that makes a boatload of assumptions as to what the message meant. Dude could've been working on just one equation, and 't' could've been torque or anything really.

And personally I love all these people who treat lojban like it's some perfect marriage of science and language. Lojban is just an experiment in how close to predicate logic you can get in human language. And while the native speakers we need to really answer this are still forthcoming, it's fairly clear that a different approach would fare better.

Funnily enough, I originally learned about Less Wrong from the lojban chaps.

Telarra fucked around with this message at 19:33 on Oct 29, 2014

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I wonder how Yudkowsky would react to the revelation that science and logic only even interact tangentially.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Cardiovorax posted:

I wonder how Yudkowsky would react to the revelation that science and logic only even interact tangentially.

He would probably bemoan how "Frequentist" scientists are, before spending 3 blog posts ruminating on how to create a 'Bayesian scientific method' and concluding with a really roundabout way of saying "I dunno".

One Swell Foop
Aug 5, 2010

I'm afraid we have no time for codes and manners.

Munin posted:

Even statements like that are pure handwavy speculation.

We currently do not have a solid model of how human cognition arises, and even basic models of how it functions are deeply contested. So without even going into the AI bullshit, the "humans function and can be modelled in X way" stuff Yudkowsky spouts is also built on clouds and fairytales.

[edit] Just re-reading this I can see how it could sound a bit harsh. Sorry One Swell Foop I in no way meant to sound aggressive towards you. Yours is a perfectly reasonable comment.

No problem, it's been many years since I've been current on the scientific literature, which is probably why I still talk about Chaos Theory like it's a thing.


Tunicate
May 15, 2012

Cardiovorax posted:

I'm really sceptical about that, personally. The brain is primarily biochemical in its operation, with subatomic particles playing a minor role and only in amounts that are sufficiently large that the behaviour of individual particles does not matter except in a statistical sense. Plus, most interesting interactions between quanta break down at higher temperatures, meaning anything hotter than frozen nitrogen in this case. The "theory" seems to be primarily espoused by people who heard about the observer effect and misunderstood it completely.

It's a chaotic system, though, so even if quantum dice-rolling isn't statistically significant at any given step, the system as a whole is still ultimately non-deterministic.

An easy example would be a C-14 atom decaying and causing a point mutation that alters gene expression, but you don't need anything that direct.
