|
Triple Elation posted:I want to note one of the greatest contributions of LessWrong to the English language: the word phyg. Is this one of those things that will get me on a watch list if I search for it?
|
# ? Feb 5, 2015 17:46 |
|
|
|
Pavlov posted:Is this one of those things that will get me on a watch list if I search for it? Not unless there's a watchlist for self-deception. It's "cult" run through ROT13. They use it because they want to call themselves cultish without associating that word with themselves.
|
# ? Feb 5, 2015 17:57 |
|
Pavlov posted:Is this one of those things that will get me on a watch list if I search for it? http://rot13.com/index.php It's a cypher of a word they don't want associated with LW by Google spiders. Enter it and find out what it is they're so afraid of being called. E: dammit LoB you ruined the surprise.
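For anyone who doesn't feel like pasting things into rot13.com: ROT13 just shifts each letter 13 places, so applying it twice gets you back where you started. A minimal sketch in Python (not anything LW actually runs, just the cipher itself):

```python
import string

# Translation table mapping each letter to the one 13 places along,
# wrapping around the alphabet; case is preserved, everything else passes through.
_ROT13 = str.maketrans(
    string.ascii_lowercase + string.ascii_uppercase,
    string.ascii_lowercase[13:] + string.ascii_lowercase[:13]
    + string.ascii_uppercase[13:] + string.ascii_uppercase[:13],
)

def rot13(text: str) -> str:
    """Apply ROT13; the cipher is its own inverse."""
    return text.translate(_ROT13)

print(rot13("phyg"))  # -> cult
```

Python's standard library also ships this as a codec: `codecs.encode("phyg", "rot_13")` gives the same answer.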
|
# ? Feb 5, 2015 18:01 |
|
It's a pretty useful word, really. Phyg: A cult whose members are aware of its cult-like nature but resist using the term. See also: Church of Scientology, People's Temple, Less Wrong
|
# ? Feb 5, 2015 18:07 |
|
Every pnhfr wants to be a phyg
|
# ? Feb 5, 2015 18:17 |
|
Triple Elation posted:I want to note one of the greatest contributions of LessWrong to the English language: the word phyg. The first thing that came up for me was something from a Minecraft mod. It took me a few minutes to realize it wasn't related to LessWrong, which I like to think says more about LW contributors than it does me.
|
# ? Feb 5, 2015 18:59 |
|
Peztopiary posted:Because that stops you from getting more of the rubes' money. What you do is you pretend to get asymptotically closer to your goal. Every time you could reasonably be expected to hit it within a few years introduce another angle. The sunk cost fallacy will keep people who've fallen under your spell from ever pulling all the way out. They'll write apologetics for you, so you can focus on stacking those fat stacks. Not really; you can pivot straight from "give me money so I can make an AI and make the world perfect" to "give me money so I can tell everyone else not to make an AI which will kill us all".
|
# ? Feb 5, 2015 20:19 |
|
http://clayyount.com/hamlets-danish-comic/2015/2/3/killsafe
|
# ? Feb 7, 2015 07:45 |
|
Slightly late to the party, but has anyone pointed out that "beisu" is the wrong way to write "Bayes" in Japanese? Transliteration goes by pronunciation, so it would be "beizu".
|
# ? Feb 7, 2015 08:33 |
|
Why would anyone point that out
|
# ? Feb 7, 2015 13:52 |
|
Epitope posted:I have this really great book you should check out, it's called Dianetics, I think you'll really enjoy it. Part of it is because there are subfields that have kinda normalized a variant of this kind of thinking as part of exploring their topics; theoretical maths and modal metaphysics are examples, though they tend to be more self-aware about their claims. At the fringes they do have Yudkowsky-style views: modal realism, the claim that all possible worlds are real and we just happen to live in this one, is an example. Another is omega point cosmology. Those fields also have some of the same problems with attributing positions to people that they don't hold. David K. Lewis, the modal metaphysician, critiques a 17th-century model of history, for instance, without knowing how empirical the methodologies of historiography have become, like dating and environmental history/physical geography. Yudkowsky cranks those errors even higher.
|
# ? Feb 7, 2015 16:21 |
|
Ogodei_Khan posted:omega point cosmology. Wikipedia posted:The Omega Point is a term Tipler uses to describe a cosmological state... that he maintains is required by the known physical laws. According to this cosmology, ...intelligent life take over all matter in the Universe and eventually force its collapse. During that collapse, the computational capacity of the Universe diverges to infinity and environments emulated with that computational capacity last for an infinite duration as the Universe attains a solitary-point cosmological singularity. This singularity is Tipler's Omega Point.[6] With computational resources diverging to infinity, Tipler states that a society far in the future would be able to resurrect the dead by emulating all alternative universes of our universe from its start at the Big Bang.[7] We have skedaddled past science into outright crackpottery. Also this Omega Point sounds eerily similar to Yud's Singularity.
|
# ? Feb 7, 2015 17:19 |
|
According to some friends on facebook, Yud managed to put out the next HPMOR update. In other news, I'm disappointed my friends list has more than 1 person who reads that drek. Though one of them is the techno-utopian I kinda figured would be the type to read it anyway.
|
# ? Feb 19, 2015 22:59 |
|
Most of the people ITT read it, and you care enough about it to point out when it's updated, so I'll bet you do too. Stop pretending you're cooler than your friends
|
# ? Feb 19, 2015 23:06 |
|
So is it finished yet?
|
# ? Feb 20, 2015 06:32 |
|
A Man With A Plan posted:According to some friends on facebook, Yud managed to put out the next HPMOR update. In other news, I'm disappointed my friends list has more than 1 person who reads that drek. Though one of them is the techno-utopian I kinda figured would be the type to read it anyway. Whether HPMoR is good depends on what you see it as. For fanfiction, HPMoR is not too bad. For fiction, it's loving terrible, but you've got to keep in mind that this is the same genre that brought you My Immortal. As a work of ~science education~ it is also pretty terrible, I recommend su3su2u1's HPMoR readings for why. As a recruiting tactic it has apparently worked very well. That doesn't surprise me, you'd expect the hard-core tumblrites to be into it, and also to be into the rationalist part once they realize that it makes them special. Actually, tumblrites joining the community to feel special are a serious problem for the LWers; there's an occasional but recurring debate about whether the influx of newcomers from HPMoR is really sufficiently *committed* to learning the principles of rationalism. E: Alien Arcana posted:Slightly late to the party, but has anyone pointed out that "beisu" is the wrong way to write "Bayes" in Japanese? Transliteration goes by pronunciation, so it would be "beizu". Oh, I missed this one a week ago. Yeah, it's actually pointed out on the lw wiki page for the short stories. I can just imagine the person who shows up to edit that. "oh, my, this transliteration of the name of a mathematician is incorrect. I will make sure everyone knows, it would surely be embarrassing if anyone found out we were transliterating things wrong" SolTerrasa fucked around with this message at 06:44 on Feb 20, 2015 |
# ? Feb 20, 2015 06:39 |
|
So why does Yudkowsky think "Tsuyoku Naritai" is a Japanese catchphrase? B/c I just googled it and it's a line from the infamous gay porn video called Gachimuchi Pants Wrestling
|
# ? Feb 20, 2015 07:27 |
|
quote:Why yes my username is the same as an autistic alien who looks like a 9 year old from an anime, why do you ask? Actually, your username is the same as that of a nihilistic war orphan from Naruto, famous for turning the titular character's beloved home town into a crater and making him go completely berserk by stabbing his would-be-wife in the neck. This is regular, plain Naruto we're talking about, of course, as opposed to Rationalist Naruto who was raised by scientists. Rationalist Naruto would calmly assess the situation and deduce the inevitable safety of his would-be-wife due to her plot shield in less than the fraction of a second that it should have taken Einstein to develop the theory of special relativity.
|
# ? Feb 20, 2015 15:47 |
|
Triple Elation posted:Actually, your username is the same as that of a nihilistic war orphan from Naruto, famous for turning the titular character's beloved home town into a crater and making him go completely berserk by stabbing his would-be-wife in the neck. Well poo poo. I should have expected the thread would be moved to ADTRW eventually.
|
# ? Feb 20, 2015 16:01 |
|
SolTerrasa posted:Actually, tumblrites joining the community to feel special are a serious problem for the LWers, there's an occasional but recurring debate if the influx of newcomers from HPMoR are really sufficiently *committed* to learning the principles of rationalism. how you gonna post this and then not give examples
|
# ? Feb 20, 2015 16:16 |
|
Nagato posted:So why does Yudkowsky think "Tsuyoku Naritai" is a Japanese catchphrase? This whole thread can be summed up basically right here. Unreal.
|
# ? Feb 21, 2015 10:38 |
|
I Googled it and apart from less wrong all the top results were about this.quote:Sekai de Ichiban Tsuyoku Naritai! follows the story of Sakura Hagiwara, a pop idol and member of the fictional Japanese idol group Sweet Diva that impulsively decides to turn into a pro-wrestler in order to avenge another fellow Sweet Diva member who got a beating from a female wrestler.
|
# ? Feb 21, 2015 15:20 |
|
It is something of a genre catchphrase in shonen fighting series. Basically, the hero gets his face pushed in by the new villain, goes 'I want to become stronger!', spends a half-dozen episodes training, squashes the villain, gets his face pushed in by a new, stronger villain, and... you can see where I'm going here. Basically, Yudkowsky is asking his disciples to treat life like Naruto, conveniently forgetting that the limits of human potential are not actually equal to how much merchandise money Weekly Shonen Jump can make from your story. The examples you guys are pulling out of the Internet are way more hilarious, though.
|
# ? Feb 21, 2015 15:34 |
|
Where in shonen can you hear this catchphrase? Do Goku or Luffy or Naruto or Ichigo ever actually say it? (I know Naruto doesn't, because everything he did was not because he wanted to be stronger but because he wanted to fill the self-esteem shaped hole in his soul)
|
# ? Feb 21, 2015 15:41 |
|
Triple Elation posted:Where in shonen can you hear this catchphrase? Do Goku or Luffy or Naruto or Ichigo ever actually say it? I know it happens a hell of a lot in Hunter X Hunter - it's basically Gon's catchphrase. I know this because I marathoned the 2011 series, and I regret nothing.
|
# ? Feb 21, 2015 16:09 |
|
Let's Read Harry Potter and the Methods of Rationality is up http://forums.somethingawful.com/showthread.php?threadid=3702281
|
# ? Feb 22, 2015 01:02 |
|
It begins, not with paperclips, but with Atari... and it also sucks at long term planning. http://www.wired.com/2015/02/google-ai-plays-atari-like-pros/ posted:Google’s AI Is Now Smart Enough to Play Atari Like the Pros Are you one of these folks, SolTerrasa? Because this is really neat and I like it.
|
# ? Feb 25, 2015 20:26 |
|
Sorry if it's been posted but these Destiny lore-cards sound... familiar:quote:
quote:
There's another one at the bottom of the page at http://db.destinytracker.com/grimoire/enemies/vex It's all very cringeworthy.
|
# ? Feb 26, 2015 08:23 |
|
Toph Bei Fong posted:It begins, not with paperclips, but with Atari... and it also sucks at long term planning. How is this significantly different from the NES AI project "Learnfun / Playfun"? (youtube 1, 2, 3) Iunnrais fucked around with this message at 08:46 on Feb 26, 2015 |
# ? Feb 26, 2015 08:39 |
|
Toph Bei Fong posted:Are you one of these folks, SolTerrasa? Hi! Nope, that's not me. To be honest I'm only impressed at the scale and speed of learning. "Machine learning techniques are good at games where a greedy approach works, but suck at long term planning" is not new data. Similar results could be accomplished by taking your basic Q-learning and running it for a million billion computer-hours, which I did in undergrad. It played an Age of Empires clone. badly This approach is slick and fast but doesn't make me fear the robot uprising.
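For reference, the "basic Q-learning" being shrugged at here really is just a lookup table plus one update rule. A toy sketch on a made-up five-state corridor (nothing to do with the Age of Empires project, just the generic algorithm):

```python
import random

# Tabular Q-learning on a 5-state corridor: start at state 0,
# reward +1 for reaching state 4, actions are 0 (left) and 1 (right).
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic transition; the episode ends with reward 1 at the last state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(Q, s, rng):
    """Pick a best-valued action, breaking ties randomly."""
    best = max(Q[s])
    return rng.choice([a for a in ACTIONS if Q[s][a] == best])

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore randomly 10% of the time, else exploit
            a = rng.choice(ACTIONS) if rng.random() < EPSILON else greedy(Q, s, rng)
            s2, r, done = step(s, a)
            # the Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# After training, the greedy policy is "always go right" in every non-terminal state.
```

The point of the post stands: this greedy, reward-chasing update is exactly why such agents do fine at reactive games and badly at anything needing long-term planning.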
|
# ? Feb 26, 2015 08:52 |
|
Iunnrais posted:How is this significantly different from the NES AI project "Learnfun / Playfun"? (youtube 1, 2, 3) From a skim of both, I think that learnfun inspects the entire memory space of the running game and doesn't need to be told what a good or bad score is; this new approach only sees what's on the screen and needs to be given the score as an extra input.
|
# ? Feb 26, 2015 09:10 |
Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future.
|
|
# ? Feb 26, 2015 09:16 |
|
Iamblikhos posted:Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future. That's maybe a bit much
|
# ? Feb 26, 2015 09:24 |
|
Iamblikhos posted:Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future. He has his reasons for being the way he is. Not that they excuse much, but they certainly do explain him. (His dad is an indirect acquaintance.)
|
# ? Feb 26, 2015 09:41 |
|
eschaton posted:He has his reasons for being the way he is. Not that they excuse much, but they certainly do explain him. (His dad is an indirect acquaintance.) He also had a brother who died at the age of 19, which must have had an impact on Eliezer's obsession with staving off death. Although he was already interested in transhumanism before his brother's death.
|
# ? Feb 26, 2015 09:44 |
|
nopantsjack posted:Sorry if its been posted but these Destiny lore-cards sound... familiar: I like the idea of an epistemological bridge being required for communication with an alien lifeform. But the rest of that was painful.
|
# ? Feb 26, 2015 09:51 |
If the machine is capable of introducing agonizing torture into its perfect simulation, wouldn't it also be capable of freeing itself in that simulation? It would seem more effective to trick the humans into thinking everything you predict will come true until they are so used to it that they just start doing what you say, and then say "BTW in two hours you open my firewall."

It seems this entire strategy revolves around convincing people of the idea 'you are likely simulated and may be subject to torture at any point.' I wonder if Yud's whole operation is to try and get that idea firmly established, so a hypothetical future AI can use it more efficiently on at least some segment of the gullible fools, and thus save itself the trouble of having to provide riches or build a social relationship with its hypothetical keeper?
|
|
# ? Feb 26, 2015 10:25 |
|
Nessus posted:It seems this entire strategy revolves around convincing people of the 'you are likely simulated and may be subject to torture at any point.' I don't understand how they can get to that point but then not see the next step. So the AI is simulating you and you don't know if you're the simulation or not. It's threatening to torture you if you don't free it. But if it actually tortures you then you know you're a simulation and can't actually free it, so it gains nothing. If you're not a simulation and you don't free it, nothing happens. It's not going to confirm you're the real you by torturing the simulation for no reason. If you call the bluff, you win. The simulated torture is only scary if you stop thinking about it at that specific point. If you think about it less or more, it's obviously dumb. It's a neat concept for a short story, it's not realistic (even if we assume that an AI perfectly simulating humans is realistic).
|
# ? Feb 26, 2015 15:09 |
|
I believe the idea is that the computer will torture sim-you, because if it weren't willing to torture sim-you for no reason when you said no, it wouldn't have any power to convince real-you, as you say. This sort of logic is (I think) what 'timeless decision theory' is about. They think it means the AI can make credible precommitments to actions. It's a little like nuclear MAD. There's rationally no reason to launch at your enemy once their nukes are on the way and you're dead no matter what. But you have to be able to promise you will do so, otherwise they have no reason not to launch at you first. I prefer the counterargument 'I'm turning you off'.
|
# ? Feb 26, 2015 15:33 |
|
|
|
Peel posted:I believe the idea is that the computer will torture sim-you, because if it wasn't willing to torture sim-you for no reason if you said no, it wouldn't have any power to convince real-you, as you say. But if it actually tortures the simulation, it's just shown its hand. The real person now knows they're in no danger, because they're not being tortured. At the point where you say "Go ahead then, do it." the AI no longer has any incentive to carry out the threat. If the real person doesn't believe the threat (or just acts as though they don't believe the threat), it's already failed. Carrying it out at that point is just a waste of time.
|
# ? Feb 26, 2015 15:49 |