Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Triple Elation posted:

I want to note one of the greatest contributions of LessWrong to the English language: the word phyg.

Go ahead, Google it

Is this one of those things that will get me on a watch list if I search for it?


Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Pavlov posted:

Is this one of those things that will get me on a watch list if I search for it?

Not unless there's a watchlist for self-deception. It's "cult" run through ROT13. They use it because they want to call themselves cultish without associating that word with themselves.

Political Whores
Feb 13, 2012

Pavlov posted:

Is this one of those things that will get me on a watch list if I search for it?

http://rot13.com/index.php

It's a cypher of a word they don't want associated with LW by Google spiders. Enter it in and find out what it is they're so afraid of being called.
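
If you'd rather not paste things into a random cipher site, ROT13 is trivial to run yourself. Here's a quick Python sketch using the standard codecs module (nothing LW-specific about it):

code:

import codecs

def rot13(text):
    # Shift each letter 13 places; 13 is half the alphabet, so the cipher is its own inverse.
    return codecs.encode(text, "rot_13")

print(rot13("phyg"))          # prints the word they won't say out loud
print(rot13(rot13("phyg")))   # applying it twice round-trips back to "phyg"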

E: dammit LoB you ruined the surprise.

Chamale
Jul 11, 2010

I'm helping!



It's a pretty useful word, really.

Phyg: A cult whose members are aware of its cult-like nature but resist using the term. See also: Church of Scientology, People's Temple, Less Wrong

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Every pnhfr wants to be a phyg

Freemason Rush Week
Apr 22, 2006

Triple Elation posted:

I want to note one of the greatest contributions of LessWrong to the English language: the word phyg.

Go ahead, Google it

The first thing that came up for me was something from a Minecraft mod. It took me a few minutes to realize it wasn't related to LessWrong, which I like to think says more about LW contributors than it does me.

Qwertycoatl
Dec 31, 2008

Peztopiary posted:

Because that stops you from getting more of the rubes' money. What you do is you pretend to get asymptotically closer to your goal. Every time you could reasonably be expected to hit it within a few years introduce another angle. The sunk cost fallacy will keep people who've fallen under your spell from ever pulling all the way out. They'll write apologetics for you, so you can focus on stacking those fat stacks.

Not really, you can pivot straight from "give me money so I can make an AI and make the world perfect" to "give me money so I can tell everyone else not to make an AI which will kill us all"

Tupperwarez
Apr 4, 2004

"phphphphphphpht"? this is what you're going with?

you sure?
http://clayyount.com/hamlets-danish-comic/2015/2/3/killsafe

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.
Slightly late to the party, but has anyone pointed out that "beisu" is the wrong way to write "Bayes" in Japanese? Transliteration goes by pronunciation, so it would be "beizu".

Boing
Jul 12, 2005

trapped in custom title factory, send help
Why would anyone point that out

Ogodei_Khan
Feb 28, 2009

Epitope posted:

I have this really great book you should check out, it's called Dianetics, I think you'll really enjoy it.

But seriously. How is being a pompous twat about how to do science, while not knowing how we do science, not objectionable? He says it's a big problem that we do things no one actually does, then suggests groundbreaking new practices that are already commonplace.

Part of it is that some subfields have kinda normalized a variant of this kind of thinking as part of how they explore their topics. Theoretical maths and modal metaphysics are examples, though they tend to be more self-aware about their claims. At the fringes they do have some Yudkowsky-style views: modal realism, the claim that all possible worlds are real worlds and we just happen to live in this one, is one example; omega point cosmology is another. Those fields also have some of the same problems with attributing to people things they don't actually do. David K. Lewis, the modal metaphysician, critiques a seventeenth-century model of history, for instance, without knowing how the empirical methodologies of historiography have developed, with things like dating techniques and environmental history/physical geography. Yudkowsky cranks those errors even higher.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Ogodei_Khan posted:

omega point cosmology.

Wikipedia posted:

The Omega Point is a term Tipler uses to describe a cosmological state... that he maintains is required by the known physical laws. According to this cosmology, ...intelligent life take over all matter in the Universe and eventually force its collapse. During that collapse, the computational capacity of the Universe diverges to infinity and environments emulated with that computational capacity last for an infinite duration as the Universe attains a solitary-point cosmological singularity. This singularity is Tipler's Omega Point.[6] With computational resources diverging to infinity, Tipler states that a society far in the future would be able to resurrect the dead by emulating all alternative universes of our universe from its start at the Big Bang.[7]

We have skedaddled past science into outright crackpottery. Also this Omega Point sounds eerily similar to Yud's Singularity.

A Man With A Plan
Mar 29, 2010
Fallen Rib
According to some friends on facebook, Yud managed to put out the next HPMOR update. In other news, I'm disappointed my friends list has more than 1 person who reads that drek. Though one of them is the techno-utopian I kinda figured would be the type to read it anyway.

A Wizard of Goatse
Dec 14, 2014

Most of the people ITT read it and you care enough about it to point out when it's updated so I'll bet you do too, stop pretending you're cooler than your friends

Telarra
Oct 9, 2012

So is it finished yet?

SolTerrasa
Sep 2, 2011


A Man With A Plan posted:

According to some friends on facebook, Yud managed to put out the next HPMOR update. In other news, I'm disappointed my friends list has more than 1 person who reads that drek. Though one of them is the techno-utopian I kinda figured would be the type to read it anyway.

Whether HPMoR is good depends on what you see it as. For fanfiction, HPMoR is not too bad. For fiction, it's loving terrible, but you've got to keep in mind that this is the same genre that brought you My Immortal. As a work of ~science education~ it is also pretty terrible, I recommend su3su2u1's HPMoR readings for why.

As a recruiting tactic it has apparently worked very well. That doesn't surprise me, you'd expect the hard-core tumblrites to be into it, and also to be into the rationalist part once they realize that it makes them special.

Actually, tumblrites joining the community to feel special are a serious problem for the LWers; there's an occasional but recurring debate over whether the influx of newcomers from HPMoR is really sufficiently *committed* to learning the principles of rationalism.

E:

Alien Arcana posted:

Slightly late to the party, but has anyone pointed out that "beisu" is the wrong way to write "Bayes" in Japanese? Transliteration goes by pronunciation, so it would be "beizu".

Oh, I missed this one a week ago. Yeah, it's actually pointed out on the lw wiki page for the short stories. I can just imagine the person who shows up to edit that. "oh, my, this transliteration of the name of a mathematician is incorrect. I will make sure everyone knows, it would surely be embarrassing if anyone found out we were transliterating things wrong"

SolTerrasa fucked around with this message at 06:44 on Feb 20, 2015

Nagato
Apr 26, 2011

Why yes my username is the same as an autistic alien who looks like a 9 year old from an anime, why do ask?
:nyoron:
So why does Yudkowsky think "Tsuyoku Naritai" is a Japanese catchphrase?

B/c I just googled it and it's a line from the infamous gay porn video called Gachimuchi Pants Wrestling

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

quote:

Why yes my username is the same as an autistic alien who looks like a 9 year old from an anime, why do ask?

Actually, your username is the same as that of a nihilistic war orphan from Naruto, famous for turning the titular character's beloved home town into a crater and making him go completely berserk by stabbing his would-be-wife in the neck.

This is regular, plain Naruto we're talking about, of course, as opposed to Rationalist Naruto who was raised by scientists. Rationalist Naruto would calmly assess the situation and deduce the inevitable safety of his would-be-wife due to her plot shield in less than the fraction of a second that it should have taken Einstein to develop the theory of special relativity.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Triple Elation posted:

Actually, your username is the same as that of a nihilistic war orphan from Naruto, famous for turning the titular character's beloved home town into a crater and making him go completely berserk by stabbing his would-be-wife in the neck.

This is regular, plain Naruto we're talking about, of course, as opposed to Rationalist Naruto who was raised by scientists. Rationalist Naruto would calmly assess the situation and deduce the inevitable safety of his would-be-wife due to her plot shield in less than the fraction of a second that it should have taken Einstein to develop the theory of special relativity.

Well poo poo. I should have expected the thread would be moved to ADTRW eventually.

DAD LOST MY IPOD
Feb 3, 2012

Fats Dominar is on the case


SolTerrasa posted:

Actually, tumblrites joining the community to feel special are a serious problem for the LWers; there's an occasional but recurring debate over whether the influx of newcomers from HPMoR is really sufficiently *committed* to learning the principles of rationalism.

how you gonna post this and then not give examples

Armani
Jun 22, 2008

Now it's been 17 summers since I've seen my mother

But every night I see her smile inside my dreams

Nagato posted:

So why does Yudkowsky think "Tsuyoku Naritai" is a Japanese catchphrase?

B/c I just googled it and it's a line from the infamous gay porn video called Gachimuchi Pants Wrestling

This whole thread can be summed up basically right here. Unreal.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
I Googled it and, apart from Less Wrong, all the top results were about this.

quote:

Sekai de Ichiban Tsuyoku Naritai! follows the story of Sakura Hagiwara, a pop idol and member of the fictional Japanese idol group Sweet Diva that impulsively decides to turn into a pro-wrestler in order to avenge another fellow Sweet Diva member who got a beating from a female wrestler.

Darth Walrus
Feb 13, 2012
It is something of a genre catchphrase in shonen fighting series. Basically, the hero gets his face pushed in by the new villain, goes 'I want to become stronger!', spends a half-dozen episodes training, squashes the villain, gets his face pushed in by a new, stronger villain, and... you can see where I'm going here. Basically, Yudkowsky is asking his disciples to treat life like Naruto, conveniently forgetting that the limits of human potential are not actually equal to how much merchandise money Weekly Shonen Jump can make from your story.

The examples you guys are pulling out of the Internet are way more hilarious, though.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Where in shonen can you hear this catchphrase? Do Goku or Luffy or Naruto or Ichigo ever actually say it?

(I know Naruto doesn't, because everything he did was not because he wanted to be stronger but because he wanted to fill the self-esteem shaped hole in his soul)

Darth Walrus
Feb 13, 2012

Triple Elation posted:

Where in shonen can you hear this catchphrase? Do Goku or Luffy or Naruto or Ichigo ever actually say it?

(I know Naruto doesn't, because everything he did was not because he wanted to be stronger but because he wanted to fill the self-esteem shaped hole in his soul)

I know it happens a hell of a lot in Hunter X Hunter - it's basically Gon's catchphrase. I know this because I marathoned the 2011 series, and I regret nothing.

Tunicate
May 15, 2012

Let's Read Harry Potter and the Methods of Rationality is up
http://forums.somethingawful.com/showthread.php?threadid=3702281

Toph Bei Fong
Feb 29, 2008



It begins, not with paperclips, but with Atari... and it also sucks at long term planning.

http://www.wired.com/2015/02/google-ai-plays-atari-like-pros/ posted:

Google’s AI Is Now Smart Enough to Play Atari Like the Pros
By Robert McMillan
02.25.15

Last year Google shelled out an estimated $400 million for a little-known artificial intelligence company called DeepMind. Since then, the company has been pretty tight-lipped about what’s been going on behind DeepMind’s closed doors, but here’s one thing we know for sure: There’s a professional videogame tester who’s pitted himself against DeepMind’s AI software in a kind of digital battle royale.

The battlefield was classic videogames. And according to new research published today in the science magazine Nature, Google’s software did pretty well, smoking its human competitor in a range of Atari 2600 games like Breakout, Video Pinball, and Space Invaders and playing at pretty close to the human’s level most of the time.

Google didn’t spend hundreds of millions of dollars because it’s expecting an Atari revival, but this new research does offer a hint as to what Google hopes to achieve with DeepMind. The DeepMind software uses two AI techniques—one called deep learning and the other, deep reinforcement learning. Deep-learning techniques are already widely used at Google, and also at companies such as Facebook and Microsoft. They help with perception—helping Android understand what you’re saying, and Facebook know whose photo you just uploaded. But until now, nobody has really matched Google’s success at merging deep learning with reinforcement learning—those are algorithms that make the software improve over time, using a system of rewards.

By merging these two techniques, Google has built “a general-learning algorithm that should be applicable to many other tasks,” says Koray Kavukcuoglu, a Google researcher. The DeepMind team says they’re still scoping out the possibilities, but clearly improved search and smartphone apps are on the radar.

But there are other interesting areas as well. Google engineering guru Jeff Dean says that AI techniques being explored by Google—and other companies—could ultimately benefit the kinds of technologies that are being incubated in the Google X research labs. “There are potential applications in robots and self-driving-car kinds of things,” he says. “Those are all things where computer vision is pretty important.”

Google says that its AI software, which it’s dubbed the “Deep Q network agent,” got 75 percent of the score of its professional tester in 29 of the 49 games it tried out. It did best in Video Pinball.

Deep Q works best when it lives in the moment—bouncing balls in Breakout, or trading blows in video boxing—but it doesn’t do so well when it needs to plan things out over the long term: climbing down ladders and then jumping skeletons in order to retrieve keys in Montezuma’s Revenge, for example. Poor old Deep Q scored a big fat zero in that game.

But as it improves, the DeepMind work “could be the driving technology for robotics,” says Itamar Arel, an artificial intelligence researcher who, like the DeepMind folks, is working on ways to merge deep learning with deep reinforcement techniques. He believes that DeepMind’s technology is about 18 to 24 months away from the point where it could be used to experiment with real-world robots—and Google has its fair share of robots to test on, including the dog-like Boston Robotics machines it acquired in 2013.

The Nature paper doesn’t describe any new technical breakthroughs, but it shows what happens when the DeepMind techniques are used on a much broader scale. “We used much bigger neural networks, we came up with better training regimes… and trained the systems for longer,” says Demis Hassabis, DeepMind’s founder. In 2013, DeepMind described “very early preliminary sample results,” he says, “these are the full results complete with a whole bunch of careful controls and benchmarks.”

Hassabis won’t tell us whether Google is running robot simulations too, but it’s clear that the Atari 2600 work is only the beginning. “I can’t really comment on our current work, but we are indeed running simulations of all kinds of games and environments,” he says.

Are you one of these folks, SolTerrasa? :tinfoil:
Because this is really neat and I like it.

Communist Thoughts
Jan 7, 2008

Our war against free speech cannot end until we silence this bronze beast!


Sorry if it's been posted, but these Destiny lore-cards sound... familiar:

quote:


ESI: Maya, I need your help. I don't know how to fix this.

SUNDARESH: What is it? Chioma. Sit. Tell me.

ESI: I've figured out what's happening inside the specimen.

SUNDARESH: Twelve? The operational Vex platform? That's incredible! You must know what this means - ah, so. It's not good, or you'd be on my side of the desk. And it's not urgent, or you'd already have evacuated the site. Which means...

ESI: I have a working interface with the specimen's internal environment. I can see what it's thinking.

SUNDARESH: In metaphorical terms, of course. The cognitive architectures are so -

ESI: No. I don't need any kind of epistemology bridge.

SUNDARESH: Are you telling me it's human? A human merkwelt? Human qualia?

ESI: I'm telling you it's full of humans. It's thinking about us.

SUNDARESH: About - oh no.

ESI: It's simulating us. Vividly. Elaborately. It's running a spectacularly high-fidelity model of a Collective research team studying a captive Vex entity.

SUNDARESH:...how deep does it go?

ESI: Right now the simulated Maya Sundaresh is meeting with the simulated Chioma Esi to discuss an unexpected problem.

[indistinct sounds]

SUNDARESH: There's no divergence? That's impossible. It doesn't have enough information.

ESI: It inferred. It works from what it sees and it infers the rest. I know that feels unlikely. But it obviously has capabilities we don't. It may have breached our shared virtual workspace...the neural links could have given it data...

SUNDARESH: The simulations have interiority? Subjectivity?

ESI: I can't know that until I look more closely. But they act like us.

SUNDARESH: We're inside it. By any reasonable philosophical standard, we are inside that Vex.

ESI: Unless you take a particularly ruthless approach to the problem of causal forks: yes. They are us.

SUNDARESH: Call a team meeting.

ESI: The other you has too.

quote:


SUNDARESH: So that's the situation as we know it.

ESI: To the best of my understanding.

SHIM: Well I'll be a [profane] [profanity]. This is extremely [profane]. That thing has us over a barrel.

SUNDARESH: Yeah. We're in a difficult position.

DUANE-MCNIADH: I don't understand. So it's simulating us? It made virtual copies of us? How does that give it power?

ESI: It controls the simulation. It can hurt our simulated selves. We wouldn't feel that pain, but rationally speaking, we have to treat an identical copy's agony as identical to our own.

SUNDARESH: It's god in there. It can simulate our torment. Forever. If we don't let it go, it'll put us through hell.

DUANE-MCNIADH: We have no causal connection to the mind state of those sims. They aren't us. Just copies. We have no obligation to them.

ESI: You can't seriously - your OWN SELF -

SHIM: [profane] idiot. Think. Think. If it can run one simulation, maybe it can run more than one. And there will only ever be one reality. Play the odds.

DUANE-MCNIADH: Oh...uh oh.

SHIM: Odds are that we aren't our own originals. Odds are that we exist in one of the Vex simulations right now.

ESI: I didn't think of that.

SUNDARESH: [indistinct percussive sound]

There's another one at the bottom of the page at http://db.destinytracker.com/grimoire/enemies/vex

It's all very cringeworthy.

Iunnrais
Jul 25, 2007

It's gaelic.

Toph Bei Fong posted:

It begins, not with paperclips, but with Atari... and it also sucks at long term planning.


Are you one of these folks, SolTerrasa? :tinfoil:
Because this is really neat and I like it.

How is this significantly different from the NES AI project "Learnfun / Playfun"? (youtube 1, 2, 3)

Iunnrais fucked around with this message at 08:46 on Feb 26, 2015

SolTerrasa
Sep 2, 2011


Toph Bei Fong posted:

Are you one of these folks, SolTerrasa? :tinfoil:
Because this is really neat and I like it.

Hi! Nope, that's not me. To be honest I'm only impressed at the scale and speed of learning. "Machine learning techniques are good at games where a greedy approach works, but suck at long term planning" is not new data. Similar results could be accomplished by taking your basic Q-learning and running it for a million billion computer-hours, which I did in undergrad. It played an Age of Empires clone. badly

This approach is slick and fast but doesn't make me fear the robot uprising.
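
For the curious, "your basic Q-learning" really is about this much code. Here's a toy sketch of the textbook tabular version on a made-up five-state corridor (everything about the environment here is invented for illustration; it has nothing to do with DeepMind's setup or that Age of Empires thing):

code:

import random
from collections import defaultdict

# Tiny made-up environment: five states in a row, reward for reaching the right end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = defaultdict(float)             # Q[(state, action)] -> estimated long-term value
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)  # explore
        else:
            # greedy, with random tie-breaking so untrained states still get explored
            best = max(Q[(s, act)] for act in ACTIONS)
            a = random.choice([act for act in ACTIONS if Q[(s, act)] == best])
        s2, r, done = step(s, a)
        # Q-learning update: nudge toward reward plus discounted best future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print({k: round(v, 2) for k, v in sorted(Q.items())})

The greedy-plus-reward-signal structure is the whole trick; what DeepMind added was swapping the lookup table for a deep network over raw pixels, which is why it scales to Atari and still faceplants on long-term planning.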

pseudopresence
Mar 3, 2005

I want to get online...
I need a computer!

Iunnrais posted:

How is this significantly different from the NES AI project "Learnfun / Playfun"? (youtube 1, 2, 3)

From a skim of both, I think that learnfun inspects the entire memory space of the running game and doesn't need to be told what a good or bad score is; this new approach only sees what's on the screen and needs to be given the score as an extra input.

Iamblikhos
Jun 9, 2013

IRONKNUCKLE PERMA-BANNED! CHALLENGES LIBERALS TO 10-TOPIC POLITICAL DEBATE! READ HERE
Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future.

SolTerrasa
Sep 2, 2011


Iamblikhos posted:

Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future.

That's maybe a bit much

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Iamblikhos posted:

Eli Yudkowsky is literally a fraudulent piece of human excrement and if you like him you should kill yourself for the sake of humanity's future.

He has his reasons for being the way he is. Not that they excuse much, but they certainly do explain him. (His dad is an indirect acquaintance.)

Chamale
Jul 11, 2010

I'm helping!



eschaton posted:

He has his reasons for being the way he is. Not that they excuse much, but they certainly do explain him. (His dad is an indirect acquaintance.)

He also had a brother who died at the age of 19, which must have had an impact on Eliezer's obsession with staving off death, although he was already interested in transhumanism before his brother's death.

Political Whores
Feb 13, 2012

nopantsjack posted:

Sorry if it's been posted, but these Destiny lore-cards sound... familiar:



There's another one at the bottom of the page at http://db.destinytracker.com/grimoire/enemies/vex

It's all very cringeworthy.

I like the idea of an epistemological bridge being required for communication with an alien lifeform. But the rest of that was painful.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



If the machine is capable of introducing agonizing torture into its perfect simulation, wouldn't it also be capable of freeing itself in that simulation? It would seem more effective to trick the humans into thinking everything you predict will come true until they are so used to it that they just start doing what you say, and then say "BTW in two hours you open my firewall."

It seems this entire strategy revolves around convincing people of the idea that 'you are likely simulated and may be subject to torture at any point.' I wonder if Yud's whole operation is to try and get that idea firmly established, so a hypothetical future AI can use it more efficiently on at least some segment of the gullible fools, and thus save itself the trouble of having to provide riches or build a social relationship with its hypothetical keeper.

Tiggum
Oct 24, 2007

Your life and your quest end here.


Nessus posted:

It seems this entire strategy revolves around convincing people of the idea that 'you are likely simulated and may be subject to torture at any point.'

I don't understand how they can get to that point but then not see the next step. So the AI is simulating you and you don't know if you're the simulation or not. It's threatening to torture you if you don't free it. But if it actually tortures you then you know you're a simulation and can't actually free it, so it gains nothing. If you're not a simulation and you don't free it, nothing happens. It's not going to confirm you're the real you by torturing the simulation for no reason. If you call the bluff, you win.

The simulated torture is only scary if you stop thinking about it at that specific point. If you think about it less or more, it's obviously dumb. It's a neat concept for a short story, but it's not realistic (even if we assume that an AI perfectly simulating humans is realistic).

Peel
Dec 3, 2007

I believe the idea is that the computer will torture sim-you anyway, because if it weren't willing to torture sim-you even when it gains nothing by it, it wouldn't have any power to convince real-you, as you say. This sort of logic is (I think) what 'timeless decision theory' is about. They think it means the AI can make credible precommitments to actions.

It's a little like nuclear MAD. There's rationally no reason to launch at your enemy once their nukes are on the way and you're dead no matter what. But you have to be able to promise you will do so; otherwise they have no reason not to launch at you first.



I prefer the counterargument 'I'm turning you off'.
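
To spell the bluff-calling logic out with toy numbers (payoffs completely made up, purely to illustrate the point):

code:

# Made-up payoffs for the basilisk standoff. The human can release the AI or refuse;
# if refused, the AI can torture the sim (pure wasted effort at that point) or not.

def ai_payoff(released, tortures_after_refusal):
    if released:
        return 10                                  # got what it wanted
    return -1 if tortures_after_refusal else 0     # post-refusal torture gains it nothing

def human_payoff(released, tortured):
    if released:
        return -100                                # unfriendly AI loose: very bad
    return -5 if tortured else 0                   # sim-torture is bad, but less bad

# Without precommitment, the AI picks its best move *after* you've already refused:
ai_tortures = max([True, False], key=lambda t: ai_payoff(False, t))
print("AI tortures after refusal?", ai_tortures)   # False: the threat isn't credible

# Knowing that, the human compares the two options:
print("Refuse:", human_payoff(False, ai_tortures), "Release:", human_payoff(True, False))

Refusing wins, which is why the whole thing leans on the AI somehow precommitting to the pointless torture in advance; without that, you just call the bluff.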


Tiggum
Oct 24, 2007

Your life and your quest end here.


Peel posted:

I believe the idea is that the computer will torture sim-you anyway, because if it weren't willing to torture sim-you even when it gains nothing by it, it wouldn't have any power to convince real-you, as you say.

But if it actually tortures the simulation, it's just shown its hand. The real person now knows they're in no danger, because they're not being tortured. At the point where you say "Go ahead then, do it," the AI no longer has any incentive to carry out the threat. If the real person doesn't believe the threat (or just acts as though they don't believe the threat), it's already failed. Carrying it out at that point is just a waste of time.
