Doctor Soup
Nov 4, 2009

I have nothing but confidence in you, and very little of that.

Curvature of Earth posted:

I really hope that's just a normal stupid name and not a stupid weeaboo Japanese name.

I have some bad news for you.

Doctor Soup fucked around with this message at 19:49 on Feb 3, 2015

SolTerrasa
Sep 2, 2011

Curvature of Earth posted:

I really hope that's just a normal stupid name and not a stupid weeaboo Japanese name.

It's the second one, but specifically it's a western name with the Japanese suffix "sai".

some dude on the internet posted:

Masters of certain arts or culture - swordsmanship, poetry, woodblock prints etc. - were often called by their pseudonyms. And it's true that 齋(or 斎, the same kanji in different glyph) was one of the common suffixes for those pseudonyms in the pre-modern age.

http://forum.wordreference.com/showthread.php?t=2585745

It's from Yudkowsky's fiction about a secret society of master rationalists, the "beisutsukai" (bei-su being a transliteration of 'Bayes', tsukai being 'user'. I looked it up once). Of course it had to be Japanese because Yudkowsky is a huge weeaboo.

I actually think the last of those bits of fiction is legitimately okay, in that it lists three failures of rationalists, two of which are legitimate and worth watching for. I'll strip out the :words: when I list them; if you want, you can read the whole thing here:

http://lesswrong.com/lw/cl/final_words/

quote:

- they look harder for flaws in arguments whose conclusions they would rather not accept.

- they invent great complicated plans, theories, and arguments, which are designed to be elegant, not realistic.

- they are underconfident.

Now obviously flaw one contradicts Yudkowsky's later advice to Defy The Data. So it's only a flaw when fictional people do it. Flaw two is just Yudkowsky.txt. And flaw three has never, ever happened.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

SolTerrasa posted:

Flaw two is just Yudkowsky.txt.
Counterpoint: His ideas may not be realistic, but they aren't elegant either.

corn in the bible
Jun 5, 2004

Oh no oh god it's all true!
if japan's so smart why did they get nuked

Peel
Dec 3, 2007

Unfriendly AI using reverse causality to attack people likely to create the Friendly AI that will defeat it. Luckily John Connor Less Wrong is here to save the day.

Darth Walrus
Feb 13, 2012

corn in the bible posted:

if japan's so smart why did they get nuked

It was the radiation that made them superintelligent, duh.

SolTerrasa
Sep 2, 2011

Sham bam bamina! posted:

Counterpoint: His ideas may not be realistic, but they aren't elegant either.

I disagree! And since I'm home sick today, here's :words:.

In my opinion, his ideas do sometimes have some elegance, but it's usually derived from completely ignoring the hard, messy parts, like implementation.

Yudkowsky says (regarding Crystalyn, his hypothetical AI) that "a natural language domdule would presumably contain the ability to notice Chomskian sentence structures, distinguish nouns from verbs, and associate English symbols or phrases with internal concepts."

I spent literal years of research trying to get an AI which already knew what it wanted to say to express its complete thought in human-sounding language. I considered the problem of actually generating thoughts completely out of scope (I'd take any input in FOL, without vetting it), and I even left the input language specification skeletal (but fully general, I'm not a hack).

My design was not elegant, but it was functional. It's stateful and clunky and written in the worst python I have ever had to see. It has the advantage of existing, and of working correctly, but that is the only part of it that is good. It solves one specific problem and it does it really well.
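
For anyone who hasn't touched this stuff, here's about the dumbest possible sketch of what "take first-order logic (FOL) in, produce English out" even means. This is nothing like my actual system (which had to handle quantifiers, tense, aggregation, referring expressions, all the horrible parts); the predicates and templates are invented for the example.

code:

# Toy sketch only: render ground first-order-logic atoms as English
# using hand-written templates. Predicates and templates are made up
# for illustration; a real surface realizer is vastly more involved.

TEMPLATES = {
    "owns": "{0} owns {1}",
    "gives": "{0} gives {1} to {2}",
    "red": "{0} is red",
}

def realize(predicate, *args):
    """Render one ground atom, e.g. owns(John, the ball), as a sentence."""
    template = TEMPLATES.get(predicate)
    if template is None:
        raise ValueError("no template for predicate %r" % predicate)
    sentence = template.format(*args)
    return sentence[0].upper() + sentence[1:] + "."

print(realize("owns", "John", "the ball"))          # John owns the ball.
print(realize("gives", "Mary", "a book", "John"))   # Mary gives a book to John.

Every hard part lives in what that skips: quantifiers, negation, anaphora, aggregation, deciding when "John owns the ball" should come out as "he still has it" instead.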

His paragraph about including NL in the language is much more elegant than my paper was. In fact, the whole domdule thing is kind of elegant, in a way. If you could build an AI on the domdule system, it would be elegant. Elegant in the way good Haskell is elegant. Everything is self-contained, it's minimally verbose but sufficiently explanatory, and there's no foreign state screwing up the internals of your program.

(okay, digression. It should have been domule, Yudkowsky. gently caress. "dom" is "mod" backwards, in addition to standing for "domain". The extra d just makes it worse)

Yudkowsky is not so much smarter than me that he can just say "associate English symbols or phrases with internal concepts" and be done with it. I built a system that did that. It isn't small enough to be a throwaway line. To actually do it you need to make all kinds of assumptions about the internals of the AI. How does it represent internal concepts? And given that you want your AI to understand English, you have more problems. What exactly, in precise technical terms, is meant by "English"? Is association with internal concepts sufficient to demonstrate understanding? Yudkowsky doesn't seem to think so: during the AI FOOM debate, he insults the design of CYC for believing exactly that. In that case, what is understanding, and how do you achieve it?

The hardest part is that once you've made all those decisions, you don't get to go back and answer differently when another subsystem needs to answer the same question. All these implementation decisions are global decisions. There's a reason it's easy to design things and hard to build them.

And I don't even know what he thinks he means by "notice chomskian sentence structures". Maybe he means phrase structure rules? Also called "context-free grammars", also called "the first mistake everyone makes when they try to write an AI that uses English".
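
If he does mean phrase structure rules, here's roughly all they buy you, with a tiny grammar I made up for the example. It will cheerfully produce grammatical nonsense all day, because it knows about structure and nothing about meaning.

code:

# Toy phrase-structure grammar (a context-free grammar). It generates
# sentences that are grammatical and meaningless, which is exactly why
# a CFG alone doesn't get you an AI that "uses English".
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "Adj", "N"]],
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["furious"]],
    "N":   [["ideas"], ["theories"], ["plans"]],
    "V":   [["sleep"], ["argue"], ["collapse"]],
    "Adv": [["furiously"], ["elegantly"]],
}

def generate(symbol="S"):
    """Expand a symbol via a random production until only words remain."""
    if symbol not in GRAMMAR:   # terminal: an actual word
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))   # e.g. "colorless green ideas sleep furiously"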

Yudkowsky has decent design chops for a fiction writer, but nothing he's ever published is actually buildable without years more specification (and honestly, I'm skeptical even then. I won't believe it until I see a demo).

A Man With A Plan
Mar 29, 2010
Fallen Rib
The idea of "dangerous" AI has been in the news some lately (Stephen Hawking and Elon Musk both said something stupid about it), so the next time you have to deal with a Yuddite or a concerned mother, you can point to this article. It's an interview with Andrew Ng, who is one of the best-known real AI researchers in the world. Most of it is specific to the challenges of Chinese human-language tech, but at the end he addresses the concern about hostile AIs.

https://medium.com/backchannel/google-brains-co-inventor-tells-why-hes-building-chinese-neural-networks-662d03a8b548

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

GottaPayDaTrollToll posted:

If he'd been around in those days he'd take one look at the Michelson-Morley experiment and dismiss the results on the basis that they didn't match his theory. He wrote an article on "defying the data" which laments that scientists aren't allowed to just stick their fingers in their ears and say "nuh uh!" when confronted with experimental results they don't agree with.

Holy poo poo just at the first line:

Yud posted:

One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment.

LemonDrizzle
Mar 28, 2012

neoliberal shithead

Pavlov posted:

Holy poo poo just at the first line:

The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated."
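
To put toy numbers on it (mine, not anything from his post): give an established theory 99% prior credence, call a single contradicting result 5% likely if the theory is true (fluke, bad apparatus) and 80% likely if it's false, and the theory still sits at about 86% after the update. "I won't believe it until it's replicated" falls straight out of the arithmetic these people claim to love.

code:

# Toy Bayes update with made-up numbers: one anomalous result barely
# dents a well-established theory.
prior_theory = 0.99          # credence in the established theory
p_result_if_true = 0.05      # chance of the anomalous result anyway (fluke, error)
p_result_if_false = 0.80     # chance of the result if the theory really is wrong

posterior = (prior_theory * p_result_if_true) / (
    prior_theory * p_result_if_true + (1 - prior_theory) * p_result_if_false
)
print(round(posterior, 3))   # 0.861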

A Wizard of Goatse
Dec 14, 2014

LemonDrizzle posted:

The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated."

Yeah, he seems just sort of oblivious to the idea that respected scientific ideas get tested more than once, and wants to reinvent controlling for variables and peer review as his own personal Grand Theory of Thinkology, complete with a terrible 'cool'-sounding catchphrase.

Epitope
Nov 27, 2006

Grimey Drawer

LemonDrizzle posted:

The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated."

I have this really great book you should check out, it's call Dianetics, I think you'll really enjoy it.

But seriously. How is being a pompous twat about how to do science, while not knowing how we do science, not objectionable? He claims it's a big problem that we do things nobody actually does, then suggests groundbreaking new practices that are already commonplace.

The Vosgian Beast
Aug 13, 2011

Business is slow

Pavlov posted:

Holy poo poo just at the first line:

I like how this is the opposite of the complaint literally everyone else has.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

quote:

the "beisutsukai" (bei-su being a transliteration of 'Bayes', tsukai being 'user')

Oh my god. Such a pity. This is some hardcore Lyttle Lytton material that Adam Cadre will never be able to appreciate properly because he hasn't had an introduction to Yudkowskology.

A real contender for winning entry of 2015, I swear to god posted:

The dejected beisutsukai shook in terror, the smoking remains of their enpitsu-kami-super-sentai-megazordu laying beside them. They knew that the punishment for their lack of tsuyoki naritai and failing Eriezer-sensei was facing the Roko no Basirisku.

Triple Elation fucked around with this message at 15:06 on Feb 4, 2015

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Triple Elation posted:

Oh my god. Such a pity. This is some hardcore Lyttle Lytton material that Adam Cadre will never be able to appreciate properly because he hasn't had an introduction to Yudkowskology.
I'm pretty sure that he'd at least appreciate enpitsu-kami-super-sentai-megazordu. :allears:

(Also, my knowledge of Japanese is essentially nil, but I do know that "n" is the only consonant that doesn't require a vowel after it. How hasn't this out-and-out weeaboo managed to pick up on that? And what rule does he think he's following when he does slap random "u"s on the ends of words?)

DAD LOST MY IPOD
Feb 3, 2012

Fats Dominar is on the case


SolTerrasa posted:

Yudkowsky is not so much smarter than me that he can just say "associate English symbols or phrases with internal concepts"


And I don't even know what he thinks he means by "notice chomskian sentence structures".

Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you? Because it seems like his idea is basically "what if we could make a computer that thinks like a person" without any idea of how that might be done or any recognition of what the challenges could be. What drives me nuts about Yud is how he writes sci-fi fan fiction about computers but acts as though he invented them. We don't think of George Lucas as the genius who invented the hyperdrive. Yud has never produced any actual interesting code or revolutionary cs implementations.

Also I think the Chomskian sentences thing means sentences like "colorless green ideas sleep furiously" ie grammatically correct but incoherent sentences.

Four Score
Feb 27, 2014

by zen death robot
Lipstick Apathy

Sham bam bamina! posted:

I'm pretty sure that he'd at least appreciate enpitsu-kami-super-sentai-megazordu. :allears:

(Also, my knowledge of Japanese is essentially nil, but I do know that "n" is the only consonant that doesn't require a vowel after it. How hasn't this out-and-out weeaboo managed to pick up on that? And what rule does he think he's following when he does slap random "u"s on the ends of words?)

enpitsu-kami-super-sentai-megazordu is gibberish in both english and glorious nipponese, hth

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

First Bass posted:

enpitsu-kami-super-sentai-megazordu is gibberish in both english and glorious nipponese, hth
Obviously. My point with the "essentially nil" bit was that you don't even have to know the language to point out that this stuff is bullshit even on the basic, basic level of its "transliteration".

Has Yudkowsky written any good articles explaining how anime is ~rationally~ the greatest thing ever? I wouldn't put it an inch past him.

Sham bam bamina! fucked around with this message at 17:09 on Feb 4, 2015

Germstore
Oct 17, 2012

A Serious Candidate For a Serious Time

DAD LOST MY IPOD posted:

Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you? Because it seems like his idea is basically "what if we could make a computer that thinks like a person" without any idea of how that might be done or any recognition of what the challenges could be. What drives me nuts about Yud is how he writes sci-fi fan fiction about computers but acts as though he invented them. We don't think of George Lucas as the genius who invented the hyperdrive. Yud has never produced any actual interesting code or revolutionary cs implementations.

Also I think the Chomskian sentences thing means sentences like "colorless green ideas sleep furiously" ie grammatically correct but incoherent sentences.

I'm wondering about this too. An inelegant implementation, even one with a much smaller scope, is far more impressive than a seemingly elegant idea where the hard parts are glossed over.

Toph Bei Fong
Feb 29, 2008



Germstore posted:

I'm wondering about this too. An inelegant implementation, even one with a much smaller scope, is far more impressive than a seemingly elegant idea where the hard parts are glossed over.

I think SolTerrasa means an elegant statement of the problem to be addressed, and of its limits. This is important to keep the project from spiraling out of control, losing focus, and turning into one of those projects that doesn't know what it's trying to do because the marketing department decided it needs to do something else ("I love your underwater barbeque. Love it. But does it have to be a barbeque? And does it have to be underwater?").

And that is important, in a way. But it's not that hard, and doesn't really require a genius to write. Even Google's "We want to build the Star Trek Computer" is good enough, if you've seen Star Trek and know what it means (voice-activated, instant searching, delivers the answer rather than a list of websites, etc.).

And, as others have said, it's way more important to have a working model, or, indeed, anything that functions at all, than to have just stated what you want, regardless of wording.

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Darth Walrus posted:

It was the radiation that made them superintelligent, duh.

I have read actual speculative fiction that argues this point.

Well, by "read," I mean "read five pages and put the book down forever," but I feel that's enough.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Antivehicular posted:

I have read actual speculative fiction that argues this point.

Well, by "read," I mean "read five pages and put the book down forever," but I feel that's enough.
Please tell me the book; I have to read this.

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Sham bam bamina! posted:

Please tell me the book; I have to read this.

I (un)fortunately don't remember; it was the first story in a collection of Japan-themed SF that must have come out during the period when the assumption was that Japan was a universal techno-utopia that was going to eat everyone's lunch. It involved two MODERN JAPANESE UBERMENSCHEN, at least one of whom was named or nicknamed "Harry," talking about new research showing that the atomic bombs had actually strengthened their racial genetic code and allowed for their current socioeconomic dominance, so thanks, Harry Truman!

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Uh, just to make things clear, the enpitsu-kami-super-sentai-megazordu sentence was my attempt at short-form fanfiction set in the beisutsukai-verse. It's not actually in any of Eriezer-sensei's prose. That I know of.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Triple Elation posted:

Uh, just to make things clear, the enpitsu-kami-super-sentai-megazordu sentence was my attempt at short-form fanfiction set in the beisutsukai-verse. It's not actually in any of Eriezer-sensei's prose. That I know of.
Man, gently caress you for putting that in a quote box. :arghfist::(

SolTerrasa
Sep 2, 2011

DAD LOST MY IPOD posted:

Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you?

...

Also I think the Chomskian sentences thing means sentences like "colorless green ideas sleep furiously" ie grammatically correct but incoherent sentences.

Not in those terms, no. Unlike Yudkowsky, I don't believe in the g factor, so even if I wanted to express an egotistical "level above mine" style thought about him, I'd have to say it in terms of skills, not in terms of raw intelligence. I'm just hedging against being wrong. I'm plenty proud of what I did; it would take a hell of a demo from Yud for me to say that he's a better AI researcher than me. And at least five years of sustained, excellent output for him to be the best I know personally. I wouldn't bet on it.

SolTerrasa
Sep 2, 2011

Sham bam bamina! posted:

Has Yudkowsky written any good articles explaining how anime is ~rationally~ the greatest thing ever? I wouldn't put it an inch past him.

I'd try to transcribe it but I'm on a bus and no one would appreciate that.

Eliezer Yudkowsky - Less Wrong Q&A (15/30): http://youtu.be/1uf9aPA8dcw

Chamale
Jul 11, 2010

I'm helping!



Sham bam bamina! posted:

Obviously. My point with the "essentially nil" bit was that you don't even have to know the language to point out that this stuff is bullshit even on the basic, basic level of its "transliteration".

Has Yudkowsky written any good articles explaining how anime is ~rationally~ the greatest thing ever? I wouldn't put it an inch past him.

quote:

"I suspect the aliens will consider this one of their great historical works of literature, like Hamlet or Fate/stay night"

Doctor Soup
Nov 4, 2009

I have nothing but confidence in you, and very little of that.

quote:

Or possibly even the greatest historical works, like the much-superior Hamlet Fate/Stay Night crossover fanfic Ham/Stay Ham by Rationalgokuspanties87.

The Monkey Man
Jun 10, 2012

HERD U WERE TALKIN SHIT

quote:

"I suspect the aliens will consider this one of their great historical works of literature, like Hamlet or Fate/stay night"

When I first read this, I didn't know anything about Yudkowsky and assumed that he was just ripping off the joke in Sleeper where Margaret Keane was regarded as a great artist 200 years in the future.

DAD LOST MY IPOD
Feb 3, 2012

Fats Dominar is on the case


SolTerrasa posted:

Not in those terms, no. Unlike Yudkowsky, I don't believe in the g factor, so even if I wanted to express an egotistical "level above mine" style thought about him, I'd have to say it in terms of skills, not in terms of raw intelligence. I'm just hedging against being wrong. I'm plenty proud of what I did; it would take a hell of a demo from Yud for me to say that he's a better AI researcher than me. And at least five years of sustained, excellent output for him to be the best I know personally. I wouldn't bet on it.

Ok good, because questions of quantifying intelligence are all well and good, but I'm not going to lose sleep questioning the epistemological basis of the statement "SolTerrasa is smarter than Eliezer Yudkowsky."

SolTerrasa
Sep 2, 2011

SolTerrasa posted:

I'd try to transcribe it but I'm on a bus and no one would appreciate that.

Eliezer Yudkowsky - Less Wrong Q&A (15/30): http://youtu.be/1uf9aPA8dcw

What he says posted:

Well, as a matter of cold calculation, I decided that... eh, it's anime. [laughter].

How has it affected my thinking? Well, I suppose that you could view it as a continuity of sort of reading the, you know, sort of the dribs and drabs of westernized eastern philosophy from Godel, Escher, Bach. Concepts like <he says some japanese>, meaning "I want to become stronger", are things that, ah, um, being exposed to the alternative eastern culture as found in anime, um, might have caused me to develop concepts of, but on the whole, it's anime, there's not some kind of elaborate calculation behind it. You know, when I'm encountering a daily problem... I'm not sure that studying anime has changed the way I think all that much.

So, nothing too interesting there.

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Big Yud, via SolTerrasa posted:

Concepts like <he says some japanese>, meaning "I want to become stronger", are things that, ah, um, being exposed to the alternative eastern culture as found in anime, um, might have caused me to develop concepts of

Self-improvement: an esoteric concept to be found only in anime. You heard it here first, guys!

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

SolTerrasa posted:

Well, I suppose that you could view it as a continuity of sort of reading the, you know, sort of the dribs and drabs of westernized eastern philosophy from Godel, Escher, Bach.
In the sense that anime comes from Asia and Hofstadter mentions Asian ideas somewhere in there, yes. He might as well try to connect it all to the band Asia at this point.

Political Whores
Feb 13, 2012

gently caress this dude for missing the point of Gödel, Escher, Bach.

E: gently caress, beaten.

su3su2u1
Apr 23, 2014
I'm very drunk, but had an insight...

Let's say you started an organization whose goal was to build a godlike AI... you've taken the money... but you don't like work, and this seems really hard. Why not just write a metric ton of words to the effect of "it would be super dangerous to build this thing I promised, so I won't until I finish mathwanking over here in the corner."

A Wizard of Goatse
Dec 14, 2014

su3su2u1 posted:

I'm very drunk, but had an insight...

Let's say you started an organization whose goal was to build a godlike AI... you've taken the money... but you don't like work, and this seems really hard. Why not just write a metric ton of words to the effect of "it would be super dangerous to build this thing I promised, so I won't until I finish mathwanking over here in the corner."

This implies ever promising a real deliverable in the first place; so long as the terminators aren't stomping on your bones, MIRI's doin' its job

Four Score
Feb 27, 2014

by zen death robot
Lipstick Apathy

Sham bam bamina! posted:

In the sense that anime comes from Asia and Hofstadter mentions Asian ideas somewhere in there, yes. He might as well try to connect it all to the band Asia at this point.

tbf "Heat of the Moment" is a cultural masterpiece

Peztopiary
Mar 16, 2009

by exmarx

su3su2u1 posted:

I'm very drunk, but had an insight...

Lets say you started an organization whose goal was to build a godlike-AI... you've taken the money... but you don't like work, and this seems really hard. Why not just write a metric ton of words to the effect of "it would be super dangerous to build this thing I promised, so I won't until I finish mathwanking over here in the corner."

Because that stops you from getting more of the rubes' money. What you do is you pretend to get asymptotically closer to your goal. Every time you could reasonably be expected to hit it within a few years, introduce another angle. The sunk cost fallacy will keep people who've fallen under your spell from ever pulling all the way out. They'll write apologetics for you, so you can focus on stacking those fat stacks.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
I want to note one of the greatest contributions of LessWrong to the English language: the word phyg.

Go ahead, Google it
