|
Curvature of Earth posted:I really hope that's just a normal stupid name and not a stupid weeaboo Japanese name. I have some bad news for you. Doctor Soup fucked around with this message at 19:49 on Feb 3, 2015 |
# ? Feb 3, 2015 19:47 |
|
Curvature of Earth posted:I really hope that's just a normal stupid name and not a stupid weeaboo Japanese name. It's the second one, but specifically it's a western name with the Japanese suffix "sai". some dude on the internet posted:Masters of certain arts or culture - swordsmanship, poetry, woodblock prints etc. - were often called by their pseudonyms. And it's true that 齋(or 斎, the same kanji in different glyph) was one of the common suffixes for those pseudonyms in the pre-modern age. http://forum.wordreference.com/showthread.php?t=2585745 It's from Yudkowsky's fiction about a secret society of master rationalists, the "beisutsukai" (bei-su being a transliteration of 'Bayes', tsukai being 'user'. I looked it up once). Of course it had to be Japanese because Yudkowsky is a huge weeaboo. I actually think the last of those bits of fiction is legitimately okay, in that it lists three failures of rationalists, two of which are legitimate and worth watching out for. I'll strip out the fiction when I list them; if you want, you can read the whole thing here: http://lesswrong.com/lw/cl/final_words/ I quote:- they look harder for flaws in arguments whose conclusions they would rather not accept. Now obviously flaw one contradicts Yudkowsky's later advice to Defy The Data. So it's only a flaw when fictional people do it. Flaw two is just Yudkowsky.txt. And flaw three has never, ever happened.
|
# ? Feb 3, 2015 21:07 |
|
SolTerrasa posted:Flaw two is just Yudkowsky.txt.
|
# ? Feb 3, 2015 21:16 |
|
if japan's so smart why did they get nuked
|
# ? Feb 3, 2015 21:19 |
|
Unfriendly AI using reverse causality to attack people likely to create the Friendly AI that will defeat it. Luckily
|
# ? Feb 3, 2015 21:25 |
|
corn in the bible posted:if japan's so smart why did they get nuked It was the radiation that made them superintelligent, duh.
|
# ? Feb 3, 2015 22:03 |
|
Sham bam bamina! posted:Counterpoint: His ideas may not be realistic, but they aren't elegant either. I disagree! And since I'm home sick today, here's an effortpost. In my opinion, his ideas do sometimes have some elegance, but it's usually derived from completely ignoring the hard, messy parts, like implementation. Yudkowsky says (regarding Crystalyn, his hypothetical AI) that "a natural language domdule would presumably contain the ability to notice Chomskian sentence structures, distinguish nouns from verbs, and associate English symbols or phrases with internal concepts." I spent literal years of research trying to get an AI which already knew what it wanted to say to express its complete thought in human-sounding language. I considered the problem of actually generating thoughts completely out of scope (I'd take any input in FOL, without vetting it), and I even left the input language specification skeletal (but fully general, I'm not a hack). My design was not elegant, but it was functional. It's stateful and clunky and written in the worst python I have ever had to see. It has the advantage of existing, and of working correctly, but that is the only part of it that is good. It solves one specific problem and it does it really well. His paragraph about including NL in the language is much more elegant than my paper was. In fact, the whole domdule thing is kind of elegant, in a way. If you could build an AI on the domdule system, it would be elegant. Elegant in the way good Haskell is elegant. Everything is self-contained, it's minimally verbose but sufficiently explanatory, and there's no foreign state screwing up the internals of your program. (okay, digression. It should have been domule, Yudkowsky. gently caress. "dom" is "mod" backwards, in addition to standing for "domain". The extra d just makes it worse) Yudkowsky is not so much smarter than me that he can just say "associate English symbols or phrases with internal concepts" and be done with it.
I built a system that did that. It isn't small enough to be a throwaway line. To actually do it you need to make all kinds of assumptions about the internals of the AI. How does it represent internal concepts? Given that you want your AI to understand English, you have more problems. What exactly, in precise technical terms, is meant by "English"? Is association with internal concepts sufficient to demonstrate understanding? Yudkowsky doesn't think so during the AI FOOM debate, where he insults the design of CYC for believing so. In that case, what is understanding, and how do you achieve it? The hardest part is that once you've made all those decisions, you don't get to go back and answer differently when another subsystem needs to answer the same question. All these implementation decisions are global decisions. There's a reason it's easy to design things and hard to build them. And I don't even know what he thinks he means by "notice Chomskian sentence structures". Maybe he means phrase structure rules? Also called "context-free grammars", also called "the first mistake everyone makes when they try to write an AI that uses English". Yudkowsky has decent design chops for a fiction writer, but nothing he's ever published is actually buildable without years more specification (and honestly, I'm skeptical even then. I won't believe it until I see a demo).
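To make the "first mistake" concrete: here's a toy sketch, in Python, of the phrase-structure-rules approach. This is my own illustration, not anything from Yudkowsky's design or SolTerrasa's actual system; the grammar and symbol names are made up for the example. The point is that a context-free grammar will cheerfully generate or accept sentences that are syntactically perfect and semantically empty, which is exactly why "notice Chomskian sentence structures" doesn't get you anywhere near understanding:

```python
import random

# A toy phrase-structure grammar ("Chomskian sentence structures" in the
# loosest sense): rewrite rules that expand nonterminals into words.
# Terminals are any symbol with no entry in the table.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "Adj", "N"]],
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["furious"]],
    "N":   [["ideas"], ["theories"]],
    "V":   [["sleep"], ["argue"]],
    "Adv": [["furiously"], ["quietly"]],
}

def generate(symbol="S", rng=random):
    """Expand a nonterminal into a flat list of words, picking rules at random."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal: already a word
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part, rng)]

# Every output is grammatical English; none of it means anything.
print(" ".join(generate()))  # e.g. "colorless green ideas sleep furiously"
```

Every sentence this produces parses fine, including Chomsky's own "colorless green ideas sleep furiously"; nothing in the grammar connects any word to an internal concept, which is the part that takes years.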
|
# ? Feb 3, 2015 22:27 |
|
The idea of "dangerous" AI has been in the news some lately, (Stephen Hawking and Elon Musk both said something stupid about it) so the next time you have to deal with a Yuddite or a concerned mother, you can point to this article. It's an interview with Andrew Ng, who is one of the best known real AI researchers in the world. Most of it is specific to the challenge of dealing with Chinese human language tech, but at the end he addresses the concern about hostile AIs. https://medium.com/backchannel/google-brains-co-inventor-tells-why-hes-building-chinese-neural-networks-662d03a8b548
|
# ? Feb 4, 2015 00:30 |
|
GottaPayDaTrollToll posted:If he'd been around in those days he'd take one look at the Michelson-Morley experiment and dismiss the results on the basis that they didn't match his theory. He wrote an article on "defying the data" which laments that scientists aren't allowed to just stick their fingers in their ears and say "nuh uh!" when confronted with experimental results they don't agree with. Holy poo poo just at the first line: Yud posted:One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment.
|
# ? Feb 4, 2015 00:38 |
|
Pavlov posted:Holy poo poo just at the first line: The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated."
|
# ? Feb 4, 2015 00:57 |
|
LemonDrizzle posted:The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated." Yeah he seems just sort of oblivious to the idea that respected scientific ideas get tested more than once, and wants to invent controlling for variables and peer review as his own personal Grand Theory of Thinkology complete with a terrible 'cool'-sounding catchphrase.
|
# ? Feb 4, 2015 01:00 |
|
LemonDrizzle posted:The clarification in the second paragraph is perfectly reasonable and true, though - you should be sceptical of seemingly inexplicable experimental results, especially if they seem to contradict established theories or principles. He writes like a pompous twat, but there's nothing really objectionable there. The only thing he's wrong about is that it's perfectly acceptable to say "I don't believe this result" or "I won't believe this result until it's replicated." I have this really great book you should check out, it's called Dianetics, I think you'll really enjoy it. But seriously. How is being a pompous twat about how to do science, while not knowing how we do science, not objectionable? He says it's a big problem that we do things nobody actually does, then proposes as groundbreaking new practices things that are already commonplace.
|
# ? Feb 4, 2015 01:57 |
|
Pavlov posted:Holy poo poo just at the first line: I like how this the opposite of the complaint literally everyone else has.
|
# ? Feb 4, 2015 02:27 |
|
quote:the "beisutsukai" (bei-su being a transliteration of 'Bayes', tsukai being 'user') Oh my god. Such a pity. This is some hardcore Lyttle Lytton material that Adam Cadre will never be able to appreciate properly because he hasn't had an introduction to Yudkowskology. A real contender for winning entry of 2015, I swear to god posted:The dejected beisutsukai shook in terror, the smoking remains of their enpitsu-kami-super-sentai-megazordu laying beside them. They knew that the punishment for their lack of tsuyoki naritai and failing Eriezer-sensei was facing the Roko no Basirisku. Triple Elation fucked around with this message at 15:06 on Feb 4, 2015 |
# ? Feb 4, 2015 15:04 |
|
Triple Elation posted:Oh my god. Such a pity. This is some hardcore Lyttle Lytton material that Adam Cadre will never be able to appreciate properly because he hasn't had an introduction to Yudkowskology. (Also, my knowledge of Japanese is essentially nil, but I do know that "n" is the only consonant that doesn't require a vowel after it. How hasn't this out-and-out weeaboo managed to pick up on that? And what rule does he think he's following when he does slap random "u"s on the ends of words?)
|
# ? Feb 4, 2015 16:16 |
|
SolTerrasa posted:Yudkowsky is not so much smarter than me that he can just say "associate English symbols or phrases with internal concepts" Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you? Because it seems like his idea is basically "what if we could make a computer that thinks like a person" without any idea of how that might be done or any recognition of what the challenges could be. What drives me nuts about Yud is how he writes sci-fi fan fiction about computers but acts as though he invented them. We don't think of George Lucas as the genius who invented the hyperdrive. Yud has never produced any actual interesting code or revolutionary CS implementations. Also I think the Chomskian sentences thing means sentences like "colorless green ideas sleep furiously", i.e. grammatically correct but incoherent sentences.
|
# ? Feb 4, 2015 16:31 |
|
Sham bam bamina! posted:I'm pretty sure that he'd at least appreciate enpitsu-kami-super-sentai-megazordu. enpitsu-kami-super-sentai-megazordu is gibberish in both english and glorious nipponese, hth
|
# ? Feb 4, 2015 17:00 |
|
First Bass posted:enpitsu-kami-super-sentai-megazordu is gibberish in both english and glorious nipponese, hth Has Yudkowsky written any good articles explaining how anime is ~rationally~ the greatest thing ever? I wouldn't put it an inch past him. Sham bam bamina! fucked around with this message at 17:09 on Feb 4, 2015 |
# ? Feb 4, 2015 17:07 |
|
DAD LOST MY IPOD posted:Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you? Because it seems like his idea is basically "what if we could make a computer that thinks like a person" without any idea of how that might be done or any recognition of what the challenges could be. What drives me nuts about Yud is how he writes sci-fi fan fiction about computers but acts as though he invented them. We don't think of George Lucas as the genius who invented the hyperdrive. Yud has never produced any actual interesting code or revolutionary cs implementations. I'm wondering about this too. An inelegant implementation, even one with a much smaller scope, is far more impressive than a seemingly elegant idea where the hard parts are glossed over.
|
# ? Feb 4, 2015 17:13 |
|
Germstore posted:I'm wondering about this too. An inelegant implementation even if with a much smaller scope is far more impressive than a seemingly elegant idea where the hard parts are glossed over. I think SolTerrasa means an elegant statement of the problem, and the limits of the problem, that would be addressed. This is important to keep the project from spiraling out of control, losing focus, turning into one of those projects that doesn't know what it's trying to do because the marketing department decided it needs to do something else ("I love your underwater barbeque. Love it. But does it have to be a barbeque? And does it have to be underwater?"). And that is important, in a way. But it's not that hard, and doesn't really require a genius to write. Even Google's "We set out to build the Star Trek Computer" is good enough, if you've seen Star Trek and know what it means (voice activated, instant searching, delivers the answer rather than websites, etc.). And, as others have said, it's way more important to have a working model, or, indeed, anything that functions at all, than to have just stated what you want, regardless of wording.
|
# ? Feb 4, 2015 18:08 |
|
Darth Walrus posted:It was the radiation that made them superintelligent, duh. I have read actual speculative fiction that argues this point. Well, by "read," I mean "read five pages and put the book down forever," but I feel that's enough.
|
# ? Feb 4, 2015 18:36 |
|
Antivehicular posted:I have read actual speculative fiction that argues this point.
|
# ? Feb 4, 2015 18:38 |
|
Sham bam bamina! posted:Please tell me the book; I have to read this. I (un)fortunately don't remember; it was the first story in a collection of Japan-themed SF that must have come out during the period where the assumption was that Japan was a universal techno-utopia that was going to eat everyone's lunch. It involved two MODERN JAPANESE UBERMENSCHEN, at least one of whom was named or nicknamed "Harry," talking about new research about how the atomic bombs had actually strengthened their racial genetic code and allowed for their current socioeconomic dominance, so thanks, Harry Truman!
|
# ? Feb 4, 2015 18:48 |
|
Uh, just to make things clear, the enpitsu-kami-super-sentai-megazordu sentence was my attempt at short-form fanfiction set in the beisutsukai-verse. It's not actually in any of Eriezer-sensei's prose. That I know of.
|
# ? Feb 4, 2015 18:53 |
|
Triple Elation posted:Uh, just to make things clear, the enpitsu-kami-super-sentai-megazordu sentence was my attempt at short-form fanfiction set in the beisutsukai-verse. It's not actually in any of Eriezer-sensei's prose. That I know of.
|
# ? Feb 4, 2015 19:18 |
|
DAD LOST MY IPOD posted:Zeroing in on this: am I misreading, or are you implying you think that The Yud is smarter than you? Not in those terms, no. Unlike Yudkowsky, I don't believe in the g factor, so even if I wanted to express an egotistical "level above mine" style thought about him, I'd have to say it in terms of skills, not in terms of raw intelligence. I'm just hedging against being wrong. I'm plenty proud of what I did; it would take a hell of a demo from Yud for me to say that he's a better AI researcher than me. And at least five years of sustained, excellent output for him to be the best I know personally. I wouldn't bet on it.
|
# ? Feb 4, 2015 19:24 |
|
Sham bam bamina! posted:Has Yudkowsky written any good articles explaining how anime is ~rationally~ the greatest thing ever? I wouldn't put it an inch past him. I'd try to transcribe it but I'm on a bus and no one would appreciate that. Eliezer Yudkowsky - Less Wrong Q&A (15/30): http://youtu.be/1uf9aPA8dcw
|
# ? Feb 4, 2015 19:31 |
|
Sham bam bamina! posted:Obviously. My point with the "essentially nil" bit was that you don't even have to know the language to point out that this stuff is bullshit even on the basic, basic level of its "transliteration". quote:"I suspect the aliens will consider this one of their great historical works of literature, like Hamlet or Fate/stay night"
|
# ? Feb 4, 2015 20:33 |
|
quote:Or possibly even the greatest historical works, like the much-superior Hamlet Fate/Stay Night crossover fanfic Ham/Stay Ham by Rationalgokuspanties87.
|
# ? Feb 4, 2015 20:49 |
|
quote:"I suspect the aliens will consider this one of their great historical works of literature, like Hamlet or Fate/stay night" When I first read this, I didn't know anything about Yudkowsky and assumed that that he was just ripping off the joke in Sleeper where Margaret Keane was regarded as a great artist 200 years in the future.
|
# ? Feb 4, 2015 20:52 |
|
SolTerrasa posted:Not in those terms, no. Unlike Yudkowsky, I don't believe in the g factor, so even if I wanted to express an egotistical "level above mine" style thought about him, I'd have to say it in terms of skills, not in terms of raw intelligence. I'm just hedging against being wrong. I'm plenty proud of what I did; it would take a hell of a demo from Yud for me to say that he's a better AI researcher than me. And at least five years of sustained, excellent output for him to be the best I know personally. I wouldn't bet on it. Ok good, because questions of quantifying intelligence are all well and good, but I'm not going to lose sleep questioning the epistemological basis of the statement "SolTerrasa is smarter than Eliezer Yudkowsky."
|
# ? Feb 4, 2015 21:20 |
|
SolTerrasa posted:I'd try to transcribe it but I'm on a bus and no one would appreciate that. What he says posted:Well, as a matter of cold calculation, I decided that... eh, it's anime. [laughter]. So, nothing too interesting there.
|
# ? Feb 5, 2015 00:48 |
|
Big Yud, via SolTerrasa posted:Concepts like <he says some japanese>, meaning "I want to become stronger", are things that, ah, um, being exposed to the alternative eastern culture as found in anime, um, might have caused me to develop concepts of Self-improvement: an esoteric concept to be found only in anime. You heard it here first, guys!
|
# ? Feb 5, 2015 01:42 |
|
SolTerrasa posted:Well, I suppose that you could view it as a continuity of sort of reading the, you know, sort of the dribs and drabs of westernized eastern philosophy from Godel, Escher, Bach.
|
# ? Feb 5, 2015 01:47 |
|
gently caress this dude for missing the point of Gödel, Escher, Bach. E: gently caress, beaten.
|
# ? Feb 5, 2015 01:59 |
|
I'm very drunk, but had an insight... Let's say you started an organization whose goal was to build a godlike AI... you've taken the money... but you don't like work, and this seems really hard. Why not just write a metric ton of words to the effect of "it would be super dangerous to build this thing I promised, so I won't until I finish mathwanking over here in the corner."
|
# ? Feb 5, 2015 06:00 |
|
su3su2u1 posted:I'm very drunk, but had an insight... This implies they ever promised a real deliverable in the first place; so long as the terminators aren't stomping on your bones, MIRI's doin' its job
|
# ? Feb 5, 2015 06:03 |
|
Sham bam bamina! posted:In the sense that anime comes from Asia and Hofstadter mentions Asian ideas somewhere in there, yes. He might as well try to connect it all to the band Asia at this point. tbf "Heat of the Moment" is a cultural masterpiece
|
# ? Feb 5, 2015 10:19 |
|
su3su2u1 posted:I'm very drunk, but had an insight... Because that stops you from getting more of the rubes' money. What you do is you pretend to get asymptotically closer to your goal. Every time you could reasonably be expected to hit it within a few years introduce another angle. The sunk cost fallacy will keep people who've fallen under your spell from ever pulling all the way out. They'll write apologetics for you, so you can focus on stacking those fat stacks.
|
# ? Feb 5, 2015 17:21 |
|
I want to note one of the greatest contributions of LessWrong to the English language: the word phyg. Go ahead, Google it
|
# ? Feb 5, 2015 17:37 |