  • Locked thread
atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

rrrrrrrrrrrt posted:

Yudkowsky reminds me of Ulillillia, right down to the belief that nothing is TRULY impossible because quantum reasons. Roko's Basilisk or whatever is The Blanket Trick: far too dangerous to talk about.

The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology: having to make a claim that can be disproven in order to be taken seriously is for scrubs. Real rationalists use their mental disorders, a priori knowledge, and their mastery of Bayes to pluck correct theories from the aether.

atelier morgan fucked around with this message at 05:24 on Apr 23, 2014


atelier morgan

Lottery of Babylon posted:

Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? :psyduck: I suspect you're right, though. Here's something he said about his AI-Box experiments:


"Hating losing is what drives me to keep going! Anyhow, when I lost I raged and gave up, and any time you're proven wrong your time and resources have gone down the drain to no benefit at all."

Yudkowsky believes the important part of science is coming up with the hypothesis to test (finding the correct position in possibility space, hurg blurf); the rest is just a big waste of time for children, because if you're right you're right. That's why he doesn't care that he has no qualifications or track record in anything he claims to study.
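For what it's worth, the "constant updating through repeated trials" that Bayes' rule actually demands is trivial to write down. A minimal sketch (the coin bias and flip sequence are invented purely for illustration):

```python
# Estimating whether a coin is biased, by updating a posterior after each
# flip via Bayes' rule. The point: the rule is itself an iterative,
# evidence-driven procedure, not a way to skip gathering evidence.

def bayes_update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior P(H | observation) from Bayes' rule."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Hypothesis H: the coin lands heads 80% of the time. Alternative: it's fair.
p_h = 0.5  # start agnostic
for flip in ["H", "H", "T", "H", "H"]:
    if flip == "H":
        p_h = bayes_update(p_h, 0.8, 0.5)
    else:
        p_h = bayes_update(p_h, 0.2, 0.5)

# Every observation moves the posterior; no amount of armchair cleverness
# substitutes for the data.
print(round(p_h, 3))  # prints 0.724
```

Each trial nudges the belief up or down depending on what was actually observed, which is exactly the part of inquiry being dismissed as "a big waste of time".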

atelier morgan

The Cheshire Cat posted:

"AI Philosopher" might be a better term. I mean it's still giving him a lot of credit to call him a philosopher given how flawed his system of reasoning seems to be, but it does at least describe what he's attempting to do. He spends all his time discussing theory, because discussing theory is easier than actually testing those theories. Yeah, a lot of science is based on theoretical conjecture, but the difference is that those theories are put to the test and actually observed in practice before we really take them too seriously.

You'll find if you bother to read the sequences, plebe, that coming up with the hypothesis from the infinite expanse of probability space is far harder than anything mere scientists do :smug:

atelier morgan

The Vosgian Beast posted:

Lemme put it this way: Less Wrong is not Jonas Salk. Less Wrong is a guy claiming we should build giant space guns to shoot diseases out of people's bodies, and that anyone who objects to pouring all our resources into this is a "sickist"

This, or basically: if you read anything on there and think "hmm, this strikes me as reasonable", then either:

1) It was an idea somebody else came up with, republished there with or without attribution, or
2) You'll find another article on the same subject that will make it seem entirely unreasonable again

atelier morgan

Lottery of Babylon posted:

Oddly enough, for all Yudkowsky's hatred of peer review, I can't seem to find any articles by him addressing it. You'd think anything he hated enough to mock through My Little Pony fanfiction he'd hate enough to devote just one of his :words: articles to, explaining why peer review is so awful. Maybe at some level he realizes he doesn't actually have a leg to stand on here and is just bitter that Nature won't publish his Harry Potter fanfic?

Since it would look really bad for his people-bilking finances to have it on record anywhere that he hates peer review, he likes to hide it in metaphor. But there's plenty on the LW site about it; the entire "bayesian conspiracy" thing is premised on peer review actively harming the advance of knowledge.

atelier morgan
As somebody who had a serious interest in Philosophy of Mind and ended up moonwalking out of the field in college because of how hosed the discussion around Qualia gets: you really don't want to go down this particular rabbit hole.

atelier morgan

AlbieQuirky posted:

I think the field with the most solid grasp on qualia is neuroscience. I liked Patricia Smith Churchland's most recent book (Touching a Nerve: The Self as Brain) a lot, though.

I happen to agree with her quite a lot and the field would be a lot more bearable if more people adopted the Churchlands' approach of actually using physical science.

atelier morgan

SolTerrasa posted:

The mantra they use to convince themselves that they are not horrible monsters is "shut up and multiply", as in "your dislike of torture is wrong because of large numbers." Where a normal person would take this lesson to mean "hm, utilitarianism might not make sense at such a large scale, I can't imagine EVER finding dust specks comparable to torture no matter what the number of people", they take the exact opposite lesson.

There's also the small problem that the whole idea of utilons as an atomic unit of utility you can just multiply everything by is a twisted and wrong misinterpretation of Bentham. And Mill (who was far more influential on every other philosopher of utilitarianism) stated unequivocally, as early as 1861, that utility cannot be defined as strictly quantifiable under any circumstances.

It's been a strawman of utilitarianism for 150 years.
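The "shut up and multiply" move being mocked here is at least easy to reproduce. A minimal sketch with entirely made-up disutility numbers (illustrative only, not taken from any LW post), showing how raw multiplication "proves" dust specks outweigh torture:

```python
# The naive aggregation behind "shut up and multiply": assign every experience
# a scalar utilon cost, multiply by headcount, compare totals. All numbers
# below are invented purely to demonstrate the mechanism Mill rejected.

DUST_SPECK_DISUTILITY = 1e-10   # utilons lost per momentary dust speck
TORTURE_DISUTILITY = 1e7        # utilons lost by one person tortured for decades

people_with_specks = 10 ** 20   # stand-in for Yudkowsky's absurdly large number

total_speck_harm = people_with_specks * DUST_SPECK_DISUTILITY  # 1e10 utilons

# The multiplication says the specks outweigh the torture, so "choose torture".
# The thread (and Mill) treat that conclusion as a reductio of the premise
# that utility is a single quantifiable scalar in the first place.
print(total_speck_harm > TORTURE_DISUTILITY)  # prints True
```

The arithmetic is trivially valid; the objection is that the model it runs on (commensurable scalar utilons) is the part Mill denied from the start.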

atelier morgan

Lightanchor posted:

All utilitarianism, no matter how good, is bad.

Given how much I hate metaethics I don't even disagree with you.

atelier morgan

pentyne posted:

My guess? It reduces everything abstract and unique about the human condition to an Excel table where you want the box at the bottom to have the highest possible value, coldly manipulating everything possible regardless of any other negative outcomes, harm, or injury it causes, because making that value the greatest trumps all other concerns.

Yud and anyone like him espousing utilitarianism use it more as an excuse for why their ideas should be followed than as a comprehensive set of philosophical goals to work towards. Oh, and for all of Yud's proclamations about such things, he still spends more time writing Harry Potter self-insert fanfiction than saving the future of the world by producing his AI research.

Much like his "timeless" formulation of quantum mechanics, Big Yud's formulation of utilitarianism is so inaccurate it's not even wrong.

Which is one way to be less wrong!

atelier morgan
Every single formulation of ethics has stronger arguments against it than for it, so pick whichever you like, pretty much. Just don't pick something that isn't even ethics, like Big Yud's.


Boing posted:

Just to clarify: I'm not pushing disgust ethics, I'm commenting on the difficulty of validating or comparing ethical systems when our only real anchoring point to them is through moral intuitions. I'm a psychologist, not a philosopher, so I really enjoy meta-ethics and intuitive moral behaviour :)

This discussion isn't even really meta-ethics, because everybody posting already agrees that moral judgments exist and have propositional value. Actual meta-ethics is far more infuriating.

atelier morgan

Cardiovorax posted:

It's categorically better than letting humanity go extinct wholesale, but I agree with the practical concerns. Mars is geologically dead; with no active core there's no dynamo to generate a magnetosphere. By the time we have the technology to give it one, we're probably rather more than capable of just building orbital habitats or whatever.

Building orbital habitats would be much, much easier than providing Mars with an atmosphere anyway.

atelier morgan

Namarrgon posted:

That's not really true. It's not as if you can just open up the fuel hatch in space and have fuel magically pop in: it takes advanced technology to find fuel and a shitload of time to reach it, because space is almost incomprehensibly big. Planets (Earth at least) also provide atmospheric and magnetic protection from space debris and radiation. Assuming the technological problems are eventually solved, space habitats are likely the (relatively) low-price, high-maintenance-cost option and planets the very high-price, very low-maintenance option.

Maintaining large orbitals would be easier than any sort of presence on a non-Earth planet. Atmospheres are corrosive and dangerous (and planets without atmospheres have very little to recommend them), being at the bottom of a gravity well makes you more distant in energy terms than the raw distance suggests, and high pressures are much more expensive and dangerous to engineer around than vacuum.

The bigger problem that prevents any sort of extraplanetary (or large undersea!) outpost at all is that we have not actually figured out how to run sufficiently complex closed ecological systems yet.
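The "distant in energy terms" point is easy to put rough numbers on. A back-of-the-envelope sketch (mass and radius values are standard textbook figures; real delta-v budgets also include drag, gravity losses, and transfer-orbit costs):

```python
import math

# The specific energy (J/kg) needed just to climb out of a gravity well is
# half the escape velocity squared: E/m = G*M/r = v_esc^2 / 2.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name: (mass in kg, mean radius in m)
    "Earth": (5.972e24, 6.371e6),
    "Mars":  (6.417e23, 3.390e6),
}

escape_speeds = {}
for name, (mass, radius) in bodies.items():
    v_escape = math.sqrt(2 * G * mass / radius)  # m/s
    escape_speeds[name] = v_escape
    energy_per_kg = 0.5 * v_escape ** 2          # J/kg
    print(f"{name}: escape velocity {v_escape / 1000:.1f} km/s, "
          f"~{energy_per_kg / 1e6:.0f} MJ per kg lifted out of the well")
```

Earth works out to roughly 11.2 km/s and ~63 MJ/kg, Mars to roughly 5.0 km/s and ~13 MJ/kg, while moving between orbits that are already "up" costs a small fraction of either, which is the sense in which a planetary surface is energetically far away even when it's physically close.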

atelier morgan

The Vosgian Beast posted:

This is all true, but Yudkowsky always struck me as someone who would fly into a rage at the mention of the word relativism. Maybe I'm wrong.

That's why he wrote a short story about moral relativism and then wrote an ending that essentially goes "well, I've written myself into a rhetorical hole I'm not good enough at philosophy to get out of, so I guess the protagonist just rejects it anyway, for reasons?"

atelier morgan

Monocled Falcon posted:

What story was that?

That doesn't sound like Three Worlds Collide.

It is; he wrote multiple endings to it because he didn't like having to accept the unacceptable.

atelier morgan

Nessus posted:

What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results.

Already started happening years ago.


atelier morgan

Mort posted:

http://lesswrong.com/lw/q9/the_failures_of_eld_science/


So is Yudkowsky suggesting that all major theoretical scientific breakthroughs should take less than a month? Because if so, holy lol.

He's suggesting that the scientific method is a bad method of inquiry because it is too slow, and that the vast majority of the work in solving a problem comes from picking the hypothesis out of the infinity that is "idea space".

This is a huge theme throughout everything he writes, because he is a quintessential ideas guy who wants to believe in his own importance.
