su3su2u1
Apr 23, 2014
So the Slatestarcodex guy responded to something I wrote on tumblr with this

http://slatestarscratchpad.tumblr.com/post/99067848611/su3su2u1-based-on-their-output-over-the-last

quote:

Possibly you’re more active in the fanfic-reading community than in the Lob’s Theorem-circumventing community?

Over the last decade:
1. A whole bunch of very important thought leaders including Stephen Hawking, Elon Musk, Bill Gates, Max Tegmark, and Peter Thiel have publicly stated they think superintelligent AI is a major risk. Hawking specifically namedropped MIRI; Tegmark and Thiel have met with MIRI leadership and been convinced by them. MIRI were just about the first people pushing this theory, and they’ve successfully managed to spread it to people who can do something about it.

2. Various published papers, conference presentations, and chapters in textbooks on both social implications of AI and mathematical problems relating to AI self-improvement and decision theory. Some of this work has been receiving positive attention in the wider mathematical logic community - see for example here

3. MIT just started the Future of Life Institute, which includes basically a who’s who of world-famous scientists. Although I can’t prove MIRI made this happen, I do know that of FLI’s five founders I met three at CFAR workshops a couple years before, one is a long-time close friend of Michael Vassar’s, and I saw another at Raymond’s New York Solstice.

4. A suspicious number of MIRI members have gone on to work on/help lead various AI-related projects at Google.

5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian, the Telegraph, the Economist, Salon, and the Financial Times. Eliezer gets cited just about every other page, and in MIRI HQ there is a two-way videoscreen link from them to Nick Bostrom’s office in Oxford because they coordinate so much. Searching the book’s bibliography for citations of MIRI people I find Stuart Armstrong, Kaj Sotala, Paul Christiano, Wei Dai, Peter de Blanc, Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser, Carl Shulman, Michael Vassar, and nine different Eliezer publications.

My impression as an outsider who nevertheless gets to talk to a lot of people on the inside is that their two big goals are to work on a certain abstruse subfield of math, and to network really deeply into academia and Silicon Valley so that their previously fringe AI ideas get talked about in universities, mainstream media, and big tech companies and their supporters end up highly placed in all of these.

As one of the many people who doesn’t understand their math, I can’t comment on that. But the networking - well, I don’t know if it’s just an idea whose time has come, or the zeitgeist, or what - but I bet that ten years ago, I could have made you bet me at any odds that this weird fringe theory called “Friendly AI” invented by a guy with no college degree wouldn’t be on the lips of Elon Musk, Stephen Hawking, half of Google’s AI department, institutes at MIT and Oxford, and scattered throughout a best-selling book.

Networking is by its nature kind of invisible except for the results, but the results speak for themselves. As such, I’d say they’re doing a pretty impressive job with the small amount of money we give them.

Anyone want to weigh in on a response? In particular, SolTerrasa how do you feel about the "suspicious number" of former MIRI employees "leading AI projects" at google?

The Vosgian Beast
Aug 13, 2011

Business is slow
I'm just kind of annoyed LW has to pretend Peter Thiel isn't a terrible, terrible, terrible human being because he gives them money.

Spazzle
Jul 5, 2003

As an aside, "thought leaders" is the most disgusting term to come out of the last decade.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

So the Slatestarcodex guy responded to something I wrote on tumblr with this

http://slatestarscratchpad.tumblr.com/post/99067848611/su3su2u1-based-on-their-output-over-the-last


Anyone want to weigh in on a response? In particular, SolTerrasa how do you feel about the "suspicious number" of former MIRI employees "leading AI projects" at google?

I definitely do, but I'm kinda busy working on AI projects at Google at the moment. Maybe tonight?

sat on my keys!
Oct 2, 2014

How many of these supposed "thought leaders" have published any academic work in CS/pure math/C&O, let alone AI? I really don't get how Stephen Hawking's opinion on "Not-QFT-or-GR subject X" is supposed to be more valuable to me than any other person's with a PhD?

Also the fact that MIRI people almost exclusively publish in vanity journals is kind of suggestive in a bad way.

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

bartlebyshop posted:

How many of these supposed "thought leaders" have published any academic work in CS/pure math/C&O, let alone AI? I really don't get how Stephen Hawking's opinion on "Not-QFT-or-GR subject X" is supposed to be more valuable to me than any other person's with a PhD?

Also the fact that MIRI people almost exclusively publish in vanity journals is kind of suggestive in a bad way.

Let's go a step towards what they try to do.
How many have proper philosophical chops?

sat on my keys!
Oct 2, 2014

Rigged Death Trap posted:

Let's go a step towards what they try to do.
How many have proper philosophical chops?

Being a philosophically ignorant rube myself, I felt this claim would be harder for me to evaluate. Crazy bullshit gets published in PRB all the time, and my assumption is similar things can happen in philosophy. But I trust my CS/physics bullshit sensors enough to be able to say that this list of "thought leaders" is about as compelling as if someone said "well, CLRS think my institute devoted to showing that high Tc cuprates are actually s-wave superconductors is awesomesauce, give me cashmoney" (to use an example relevant to me).

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I definitely do, but I'm kinda busy working on AI projects at Google at the moment. Maybe tonight?

I'll hold off replying until you have a chance to weigh in.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
Way to disappear Ray Kurzweil, Slate Star Codex dude.

MizPiz
May 29, 2013

by Athanatos

Rigged Death Trap posted:

Let's go a step towards what they try to do.
How many have proper philosophical chops?

I think it's safe to assume that to them philosophy as an academic field is nothing but a bunch of people with varying degrees of intelligence churning out whatever inane bullshit they can to scam money from their followers and inflate their own ego.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I'll hold off replying until you have a chance to weigh in.

This post has been edited substantially from the original version; it's really hard to talk about Google, which is part of the problem.

Thanks! It might not be as enlightening as you hope, but here goes.

I'm going to start by talking about the Google bits. Standard disclaimers: I speak for myself, not Google, I'm just some dude who works there and I'm not PR-approved.

For this to make sense you have to understand a couple of things about Google. Number one is that we don't get orders from on high very often. We might get a goal, like "increase the quality of search results by such-and-such an amount" from on high, and then that will filter down through the levels and be increasingly refined by people who are closer and closer to the problem. I am seven levels removed from Larry, but only three levels worth of goal-refinement. The goals for people at the very leaf nodes of the org tree are still often freakishly abstract, and you can use pretty much any kind of solution you want. I've got a coworker who is really infatuated with ordering theory. It turns out that most of our problems can be solved with partial orders. You have to wrap them in a hundred other things before they fix all your problems, but they're the core math, the core solution. I personally really like functional transformations, and so I solve most problems by writing essentially pure mapping functions and running them in massive parallel over bucketloads of data. Last quarter, I decided I really liked AI and ML, so I solved a bunch of problems with AI and ML. That's what I mean when I say I'm working on AI projects at Google; it's not like I'm trying to build a sentient lifeform or whatever. So he might be telling the truth, but it's not likely that it means anything.
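
For concreteness, here's a toy sketch of what "pure mapping functions run in massive parallel" means in practice - nothing Google-specific, just stock Python standing in for infrastructure I can't talk about:

code:

# A pure function: same input, same output, no side effects, so it's
# trivially safe to fan out over as many workers (here: processes) as you like.
from multiprocessing import Pool

def tokens(line):
    return line.lower().split()

if __name__ == "__main__":
    corpus = [
        "partial orders solve most of our problems",
        "pure functions map nicely over bucketloads of data",
    ]
    with Pool() as pool:
        mapped = pool.map(tokens, corpus)   # the "map" half, run in parallel
    total = sum(len(t) for t in mapped)     # a tiny "reduce" at the end
    print(total)  # 15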

So let's take "leading AI projects at Google" to mean "works at Google, likes AI". The latter we can take as a given for anyone who gave up years of their life to work at MIRI. The former is harder to talk about here. Basically everything at Google is confidential in one way or another, so I probably can't even confirm what people are working on, or what our AI projects are, or whatever. This is why the MIRI people think we have all these AI projects; because they would if they had our nigh-unlimited resources, and no one will ever deny it, so therefore these projects exist.

I guess I need something much more specific from him before I can actually make a real comment. I don't really understand what he's implying. Is he implying that Google is poaching Friendly AI researchers for our own FAI project? Is he implying that the people who work on FAI are mostly smart enough to work at Google? I have very different things to say about those two things.

Needless to say, we're probably not poaching people for Friendly AI. If we were I think I'd have noticed. Though it is loving hilarious that he noticed "people are leaving MIRI for Google" and didn't connect it to "because MIRI is a poo poo organization to work for if you're brilliant, and Google isn't." Instead in his head it's because they've been poached to work on Google's own friendly AI. :tinfoil:

Things I would like include names of former MIRI members who work at Google now (since I couldn't find a historical list) and a list of things they think Google is working on so I can laugh at it. I'm unlikely to be able to respond to it, though, so it won't help for your Tumblr war (please don't have a Tumblr war, and if you do, please don't involve me).

Also, I want to point out that of the many people at Google I've interacted with on AI topics, basically none knew about Yudkowsky, most of the rest thought his ideas were hilarious, exactly one took him seriously, and a tiny fraction thought he was wrong but that we shouldn't mock him. He hardly ever enters the discourse here, so that tells you something about how effective his "networking" has been.

In stack ranking terms my vague guess would be that Yudkowsky ranks above "faith healers", but below "climate change deniers" in terms of Engineering-wide Googler belief (I don't know about marketing / sales / whatever), and below all those in terms of Eng-wide Googler awareness / engagement.

But again, I don't speak for Google, just my own impressions of the culture here.

Okay, Google bit over. From here on I'm much less an authority, so comment away; I'll come back with opinions later.

SolTerrasa fucked around with this message at 03:20 on Oct 4, 2014

SolTerrasa
Sep 2, 2011

e: put it all in the first post.

SolTerrasa fucked around with this message at 03:14 on Oct 4, 2014

Djeser
Mar 22, 2013


it's crow time again

Rigged Death Trap posted:

Let's go a step towards what they try to do.
How many have proper philosophical chops?

The LW line is that philosophy is worthless and studying it is only useful for 'status' purposes. It was explicitly stated in the My Little Pony fanfiction I read.

The hyperrational pony AI posted:

You’re not failing because you’re stupid: the material that society thinks you should learn is arbitrary, boring, and is taught primarily for status reasons.

You're not a slacker, you're GIFTED posted:

You’re moderately smart--no matter what your grades say. I’ve watched you read statistics papers for fun while procrastinating on your studying. I’ve watched you read all sorts of advanced papers from various science journals instead of your assigned readings. And you’re right to do so; your philosophy classes really are a waste of time.

Djeser fucked around with this message at 03:11 on Oct 4, 2014

SolTerrasa
Sep 2, 2011

Djeser posted:

The LW line is that philosophy is worthless and studying it is only useful for 'status' purposes. It was explicitly stated in the My Little Pony fanfiction I read.

That's an overstatement; the MLP:FiM contingent of LW is substantial (alarmingly substantial), but nothing like a majority. And even of those people, not everyone believes that philosophy in the abstract is bullshit.

Yudkowsky seems to do his usual thing, where he claims that an entire field is bullshit except for the one interpretation he happens to agree with. http://wiki.lesswrong.com/wiki/Reductionism_(sequence)

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I'm unlikely to be able to respond to it, though, so it won't help for your Tumblr war (please don't have a Tumblr war, and if you do, please don't involve me).

I was just curious what he could possibly mean. Like, it seems like MIRI/SIAI has only had a handful of researchers ever, so how many could really have moved on to google?

I don't plan to have a tumblr war. I was just curious on your take. I also talked to another friend of mine who works at google in Boston.

sat on my keys!
Oct 2, 2014

su3su2u1 posted:

I was just curious what he could possibly mean. Like, it seems like MIRI/SIAI has only had a handful of researchers ever, so how many could really have moved on to google?

I don't plan to have a tumblr war. I was just curious on your take. I also talked to another friend of mine who works at google in Boston.

I also know 4-5 people who work at Google if you need more anecdata.

potatocubed
Jul 26, 2012

*rathian noises*

SolTerrasa posted:

That's an overstatement; the MLP:FiM contingent of LW is substantial (alarmingly substantial), but nothing like a majority. And even of those people, not everyone believes that philosophy in the abstract is bullshit.

Plus Nick Bostrom, who spends a lot of his time citing Yudkowsky in his work and inviting him to contribute to books he's editing, is a professor of philosophy. Like, that's his actual job title. It would seem a bit churlish for Yud to say 'thanks for the cites, which I'll use to look like a proper academic, but your entire field is bullshit'.

Phobophilia
Apr 26, 2008

by Hand Knit
I think there's only a small subset of STEM uber alles folks who take that hard line. Everyone else just nods and leaves them be to gently caress their pony plushies, lest they attract their attention.

Strategic Tea
Sep 1, 2012

From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers.

Epitope
Nov 27, 2006

Grimey Drawer

Strategic Tea posted:

From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers.

Agreed. If that post's aim was to tout MIRI's accomplishments, it's pretty unimpressive. We're interested in AI and some of us manage to get AI related jobs! We're worried about a possibly-dark-future-we-came-up-with-and-totally-isn't-from-Terminator/The Matrix, and talked to famous people about it!

potatocubed
Jul 26, 2012

*rathian noises*

Strategic Tea posted:

From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers.

Pretty much.

I mean, over here you have a book chapter that Bostrom and Yudkowsky wrote, which cites Yudkowsky a few times - except those citations point to a chapter by Yudkowsky (cited to look like two chapters, naughty naughty) in a book that Bostrom edited.

Dig deeper - Yudkowsky also likes to cite the Journal of Evolution and Technology, an online academic journal operated by the Institute for Ethics and Emerging Technologies, an institute founded by... Nick Bostrom. (The journal was previously operated by the World Transhumanist Association.)

Oh, and not only has Nick Bostrom been the Editor-in-Chief for JET in the past, so has Robin Hanson.

I mean, this kind of situation isn't exactly uncommon in academia - when your exact field has only three people studying it in the world, you tend to end up citing each other in circles - but it's still a bit iffy. If you look at the names on the MIRI staff page and look at the citation list for any given MIRI paper there's an awful lot of overlap; you could argue that this is a natural result of no one else studying the field, so they have to cite one another... or you could argue that they're off in a corner doing nothing worthwhile. The greater scientific community doesn't give a poo poo.

Anyway:

quote:

1. A whole bunch of very important thought leaders ... have publicly stated they think superintelligent AI is a major risk.

Disingenuous. For example, Bill Gates speaking out on global AI risk? This article here. The exact quote is "There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list."

That's it.

Stephen Hawking actually did worry about angry AI in this opinion piece in the Independent.

Elon Musk? He tweeted "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes". So we're back to Nick Bostrom again.

quote:

2. Various published papers, conference presentations, and chapters in textbooks on both social implications of AI and mathematical problems relating to AI self-improvement and decision theory. Some of this work has been receiving positive attention in the wider mathematical logic community - see for example here

The link in 'here' goes to this blog post from March 2013. So for "some of this work has been receiving positive attention in the wider mathematical logic community" read "a single, unfinished paper from 18 months ago got a blog mention and then sank without trace". (It's been cited once, here, where the author spends a sentence pointing out that Yud & co's thesis is rubbish but the problem that sinks it doesn't apply to her thesis.)

quote:

4. A suspicious number of MIRI members have gone on to work on/help lead various AI-related projects at Google.

A suspicious number of MIRI members have achieved absolutely nothing. Who's to say which half-assed generalisation is more telling?

quote:

5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian, the Telegraph, the Economist, Salon, and the Financial Times.

So was the Da Vinci Code.

potatocubed fucked around with this message at 21:04 on Oct 4, 2014

SolTerrasa
Sep 2, 2011

potatocubed posted:

Stephen Hawking actually did worry about angry AI in this opinion piece in the Independent.

I just read that. I am shocked that anyone would namedrop Hawking, like he matters to AI, when Stuart Russell was a coauthor on that piece. Russell knows what he's about, he literally wrote The Book on AI. I wonder how much of that he believes. I wonder if he's a MIRI supporter. The most recent version of the book does have a sentence about Yudkowsky in it.

I would give this whole thing a serious re-think if I discovered that he believed in it; I have respect for him.

E: here's Yud et al crowing about their mention. http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/

Huh. I'm going to have to think about that.

SolTerrasa fucked around with this message at 21:45 on Oct 4, 2014

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

SolTerrasa posted:

For this to make sense you have to understand a couple of things about Google. Number one is that we don't get orders from on high very often. We might get a goal, like "increase the quality of search results by such-and-such an amount" from on high, and then that will filter down through the levels and be increasingly refined by people who are closer and closer to the problem. I am seven levels removed from Larry, but only three levels worth of goal-refinement. The goals for people at the very leaf nodes of the org tree are still often freakishly abstract, and you can use pretty much any kind of solution you want.

Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals.

SolTerrasa
Sep 2, 2011

Ratoslov posted:

Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals.

Oh, man, you are so right you don't even know. We like to say "there are two ways to do anything at Google, the way that's deprecated and the way that's not ready yet." Or "any code you could possibly want to write is already checked in, but it probably doesn't work anymore".

Obviously Google does work, though, so there must be an explanation for that. I have some guesses. We have the advantage that our hiring selects for really, really adaptable people. We'll pick a good generalist over a great specialist every time. We have really good PMs, and really really good managers, so you usually hear if things will change out from under you. But yeah, it's a constant challenge to keep everything reasonably coherent given that there's a single monolithic repository (http://www.infoq.com/presentations/Development-at-Google) and everyone builds from the most recent changes. Our build infrastructure is a work of genius.

Oh, that's the other thing. It's ALL a work of genius. Every piece of infrastructure at Google was built by really smart people, for really smart people to use. This is great, and also terrible; it means that things that should be impossible are ridiculously easy. But it also means that you have to know everything about everything to use anything. You wouldn't expect to have to be very smart in order to, for instance, serialize some arbitrary struct to persistent storage. But you do. You get all the advantages of the geniuses who built protobuf (https://developers.google.com/protocol-buffers/) and bigtable (http://en.m.wikipedia.org/wiki/BigTable) and colossus/gfs (http://en.m.wikipedia.org/wiki/Google_File_System) and a couple other things that are still secret. That struct will be so goddamn well-serialized that it will be recoverable a thousand years hence, when we've long since moved on from using bits and bytes, after a nuclear attack, asteroid strike, and xenonauts-style alien invasion simultaneously. But you have to hold the whole system architecture (and all the changes and upcoming changes) in your head the whole time you're working. It's hard.
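
To make the serialization bit concrete, here's roughly what the public, open-source protobuf workflow looks like - the message and file names are made up for the example, nothing internal:

code:

# Suppose record.proto contains:
#   syntax = "proto3";
#   message Record {
#     string name = 1;
#     int64 value = 2;
#   }
# and `protoc --python_out=. record.proto` has generated record_pb2.py.
import record_pb2  # hypothetical generated module, for illustration only

rec = record_pb2.Record(name="example", value=42)
blob = rec.SerializeToString()            # compact, schema'd wire format

with open("record.bin", "wb") as f:       # "persistent storage", minus the genius parts
    f.write(blob)

parsed = record_pb2.Record()
with open("record.bin", "rb") as f:
    parsed.ParseFromString(f.read())
assert parsed.value == 42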

Tunicate
May 15, 2012

Ratoslov posted:

Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals.

have you seen Valve's structure.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

SolTerrasa posted:

But you have to hold the whole system architecture (and all the changes and upcoming changes) in your head the whole time you're working. It's hard.

Ugh. Honestly sounds like a nightmare to me, but I can see how it works. Thank you.

Tunicate posted:

have you seen Valve's structure.

I've read their 'employee handbook', yeah. I'm not entirely sure how they manage the task of getting the rent paid for their office-space, let alone conquer the digital game-distribution world with that kind of org structure.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Ratoslov posted:

Ugh. Honestly sounds like a nightmare to me, but I can see how it works. Thank you.


I've read their 'employee handbook', yeah. I'm not entirely sure how they manage the task of getting the rent paid for their office-space, let alone conquer the digital game-distribution world with that kind of org structure.

Their empire has been built on two great discoveries:

1. People will use a DRM system if it is fun and has regular sales (and they get to collect a cut)
2. Hats

Chamale
Jul 11, 2010

I'm helping!



SolTerrasa posted:

I just read that. I am shocked that anyone would namedrop Hawking, like he matters to AI, when Stuart Russell was a coauthor on that piece. Russell knows what he's about, he literally wrote The Book on AI. I wonder how much of that he believes. I wonder if he's a MIRI supporter. The most recent version of the book does have a sentence about Yudkowsky in it.

It's like when Szilárd and Einstein proposed the atomic bomb. It was Szilárd's idea, but he asked Einstein to sign his name on it because he was the most famous physicist in the world, just like how Hawking is the most famous physicist now.

Toph Bei Fong
Feb 29, 2008



I'm speaking here as a philosophy guy, so if this is an incorrect way of thinking about the problem, please would someone more knowledgeable about AI and/or math correct me.

How does Yudkowsky et al.'s timeless decision theory work out against Bremermann's Limit? (For those who don't know: this is the maximum value that a theoretical supercomputer which uses the entire mass of the Earth, grinding away for the entirety of history, can calculate; essentially the maximum number any computer could ever use.) Surely an AI running billions and billions and billions of perfect simultaneous simulations of every possible probability for every possible person would require more than 2.56 x 10^92 digits of processing. Not to mention what such a thing would even use for fuel or as heat sinks... Even a hypothetical self-improving AI can't break the laws of physics.
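
Just to sanity-check that figure, here's the back-of-envelope behind it, with rough published numbers I'm supplying myself, so treat it as order-of-magnitude only:

code:

# Bremermann's limit, roughly: ~1.36e50 bits per second per kilogram.
bremermann = 1.36e50     # bits / s / kg
earth_mass = 6.0e24      # kg
earth_age = 1.4e17       # seconds (~4.5 billion years)

total_bits = bremermann * earth_mass * earth_age
print(f"{total_bits:.2e}")   # ~1.14e+92, the same ballpark as 2.56 x 10^92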

I mean, TDT is already basically just a crappy recitation of Pascal's wager, but I'm curious if the more practical issues with it have been discussed.

Tunicate
May 15, 2012

Well you see it might be possible for physics to be wrong and that's what Yud is betting on.

Silver2195
Apr 4, 2012

Spoilers Below posted:

I'm speaking here as a philosophy guy, so if this is an incorrect way of thinking about the problem, please would someone more knowledgeable about AI and/or math correct me.

How does Yudkowsky et al.'s timeless decision theory work out against Bremermann's Limit? (For those who don't know: this is the maximum value that a theoretical supercomputer which uses the entire mass of the Earth, grinding away for the entirety of history, can calculate; essentially the maximum number any computer could ever use.) Surely an AI running billions and billions and billions of perfect simultaneous simulations of every possible probability for every possible person would require more than 2.56 x 10^92 digits of processing. Not to mention what such a thing would even use for fuel or as heat sinks... Even a hypothetical self-improving AI can't break the laws of physics.

I mean, TDT is already basically just a crappy recitation of Pascal's wager, but I'm curious if the more practical issues with it have been discussed.

I'm not an expert on these things either, but I think your general point is a good one. It seems that the dumb Pascal's Wager type scenarios involving TDT that people have come up with all involve effectively infinite simulations, while the thought experiments where TDT actually makes sense (like Newcomb's Problem) don't involve that kind of scale.
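
(For anyone who hasn't seen it, Newcomb's Problem only needs one predictor and two boxes. The standard payoffs are below; the predictor accuracy is a number I've picked arbitrarily for the sketch:)

code:

# Newcomb's Problem, expected value under a merely-very-good predictor.
# The opaque box holds $1,000,000 iff the predictor expected one-boxing;
# the transparent box always holds $1,000.
accuracy = 0.99  # assumed predictor accuracy, chosen for illustration

ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)
print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0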

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Tunicate posted:

Well you see it might be possible for physics to be wrong and that's what Yud is betting on.

I went back to the article where Yud says this, and found a new type of crazy in the first comment:

Johnicolas posted:

You reference a popular idea, something like "The integers are countable, but the real number line is uncountable." I apologize for nitpicking, but I want to argue against philosophers (that's you, Eliezer) blindly repeating this claim, as if it was obvious or uncontroversial.

Uh, it is, at least to the extent that anything involving infinities can be said to be "obvious".

For all Yudkowsky's blind appropriation of misunderstood concepts from mathematics, "R is uncountable" is true.

Johnicolas posted:

Yes, it is strictly correct according to current definitions. However, there was a time when people were striving to find the "correct" definition of the real number line. What people ended up with was not the only possibility, and Dedekind cuts (or various other things) are a pretty ugly, arbitrary construction.

Dedekind cuts are rather ugly and unintuitive if you haven't seen them before. But they're hardly the only way to construct the reals from the rationals. For example, Cauchy completeness is a very natural one, is not at all ugly, is much more commonly used, generalizes easily to other spaces, and gives the same result. When you get several completely different constructions all producing the same result, the result isn't arbitrary.
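
For reference, here are the two constructions side by side (standard definitions, nothing exotic):

code:

% A real number as a Dedekind cut: a nonempty, downward-closed, proper
% subset of Q with no greatest element.
\[
  \mathbb{R}_{\text{Dedekind}} = \bigl\{\, A \subsetneq \mathbb{Q} :
    A \neq \varnothing,\ (\forall a \in A)(\forall q \in \mathbb{Q})\,(q < a \Rightarrow q \in A),\
    A \text{ has no greatest element} \,\bigr\}
\]
% A real number as an equivalence class of Cauchy sequences of rationals,
% identifying sequences whose difference tends to 0.
\[
  \mathbb{R}_{\text{Cauchy}} = \{\text{Cauchy sequences } (q_n) \subseteq \mathbb{Q}\}
    \,/\, \bigl((q_n) \sim (r_n) \iff q_n - r_n \to 0\bigr)
\]

Both give complete ordered fields, and any two complete ordered fields are isomorphic, which is the precise sense in which the result isn't arbitrary.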

Johnicolas posted:

The set containing EVERY number that you might, even in principle, name or pick out with a definition is countable (because the set of names, or definitions, is a subset of the set of strings, which is countable).

In a follow-up comment, he explains further that we need to stop using the real numbers and start using the "nameable numbers" instead. It's not clear how you would rigorously define such a set or why it would be desirable to do so; this is like a set-theoretic version of Yud's "0 and 1 aren't probabilities" thing.

In fact, with even a small amount of mathematical background, it is quite obvious that such a set cannot be well-defined. If S is the set of "nameable numbers" and isn't all of R, then either Cantor's Diagonal Argument or the Well-Ordering Theorem can be used to deterministically name a number not in S.
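
Here's the diagonal construction in miniature - a finite toy, since you'd really run it lazily down an infinite enumeration, but the digit rule is the whole trick:

code:

# Given a listing of reals in (0,1) as decimal digit strings, build a number
# that differs from the n-th entry in its n-th digit, so it can't be on the list.
def diagonalize(listing):
    new_digits = []
    for n, expansion in enumerate(listing):
        d = expansion[n] if n < len(expansion) else "0"
        new_digits.append("5" if d != "5" else "4")  # differ at the n-th digit
    return "0." + "".join(new_digits)

print(diagonalize(["1415", "7182", "0000", "5555"]))  # 0.5554, not in the list

If the "nameable numbers" were a well-defined countable set, this procedure would name a number outside it, which is exactly the contradiction.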

Johnicolas posted:

The Lowenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).

Here's a handy tip for armchair well-I-know-I'm-quite-clever people: if you ever find yourself "loosely interpreting" a mathematical theorem, stop. Anything you write based on your loose interpretation will almost surely be as accurate as a portrayal of quantum physics in a sci-fi b-movie.

For one thing, Lowenheim-Skolem only applies in first-order logic, and its statement is provably false if extended to other systems. We are perfectly capable of using languages outside of first-order logic - hell, you can't even talk about the notions of "infinite" or "countable" in first-order logic - so restrictions on what can be said in first-order logic don't actually restrict what we ourselves can say.

For another thing, Lowenheim-Skolem does not say that "there's a perfectly self-consistent interpretation of your talk that refers only to finite things". What it says is that every theory with an infinite model has a model of every infinite cardinality - that is, for any set of rules you write down that hold true in some set of infinite size, there is a set of every infinite size in which they hold true too. Just about the only thing it doesn't say is that there's a finite model. Did you even read the theorem?
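
For reference, the actual statement (for a countable first-order theory T):

code:

% Lowenheim-Skolem, upward and downward forms combined: if a countable
% first-order theory T has an infinite model, it has models of every
% infinite cardinality.
\[
  \bigl(\exists \mathfrak{M} \models T,\ |\mathfrak{M}| \geq \aleph_0\bigr)
  \;\Longrightarrow\;
  \bigl(\forall \kappa \geq \aleph_0\ \ \exists \mathfrak{N} \models T,\ |\mathfrak{N}| = \kappa\bigr)
\]

Nothing in there produces a finite model, let alone one consisting of "your definitions and proofs themselves".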

On a more technical note, the proof of Lowenheim-Skolem requires the axiom of choice, which states that given any (possibly infinite) collection of nonempty sets, it is possible to choose one element from each set. It seems obviously true and is almost universally accepted in mathematics, but it has a couple of unusual consequences (the well-ordering theorem and the Banach-Tarski theorem) that feel odd and raise eyebrows. Johnicolas's line of argument suggests that he disagrees with the axiom of choice - after all, if you can only write down finitely many strings, you can't make uncountably many distinct choices as required. But if he rejects the axiom of choice, he can't use Lowenheim-Skolem.

He later clarifies what he means:

Johnicolas posted:

Mathematicians routinely use "infinite" to mean "infinite in magnitude". For example, the concept "The natural numbers" is infinite in magnitude, but I have picked it out using only 19 ascii characters. From a computer science perspective, it is a finite concept - finite in information content, the number of bits necessary to point it out.

Each of the objects in the set of the Peano integers is finite. The set of Peano integers, considered as a whole, is infinite in magnitude, but finite in information content.

First of all, what the hell does this have to do with Lowenheim-Skolem? You're not talking about the size of the model anymore, you're talking about the size of the theory. "Anything that can be expressed as a finite string can be expressed as a finite string" isn't a theorem or an insight, it's a tautology.

Second of all, the real numbers are "finite" under this definition anyhow, so how does any of this connect in any way to your original point?

Johnicolas posted:

You don't get magical powers of infinity just from claiming to have them. Standard mathematical talk is REALLY WEIRD from a computer science perspective.

A mathematician picks up a piece of chalk. Upon the blackboard he writes: "2+2=4".

Suddenly, the door bursts open and a computer scientist charges in. "LIES!" he shouts. "You write about twos and fours, but you don't have two of anything! You don't have two of anything! You don't have four of anything!" He pulls out a monogrammed handkerchief and wipes the sweat from both his chins. "You do not magically conjure matter merely by writing letters on a page! They are nothing more than symbols! You have two of nothing, you have four of nothing, you have nothing! No matter how many symbols you invent, in the end you will have nothing!"

He begins panting, and tosses his trenchcoat to the floor to cool down. "If writing a number on the board could physically summon it, you would be making something from nothing! You'd violate Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?" He collapses.

The cause of death is determined to be a stroke.

Johnicolas posted:

I have a knee-jerk response, railing against uncountable sets in general and the real numbers in particular; it's not pretty and I know how to control it better now.

Lottery of Babylon fucked around with this message at 17:49 on Oct 5, 2014

Telarra
Oct 9, 2012

Honestly, the whole deal with "Friendly AI" and MIRI's work on it just comes across to me like a bunch of iron-age blacksmiths desperately trying to figure out what safety measures you'd need in a modern steel foundry, having no idea how such a thing would function, or even how to make steel. Empty guesses and wasted effort, rationalized only by the perceived danger.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Lottery of Babylon posted:

A mathematician picks up a piece of chalk. Upon the blackboard he writes: "2+2=4".

Suddenly, the door bursts open and a computer scientist charges in. "LIES!" he shouts. "You write about twos and fours, but you don't have two of anything! You don't have two of anything! You don't have four of anything!" He pulls out a monogrammed handkerchief and wipes the sweat from both his chins. "You do not magically conjure matter merely by writing letters on a page! They are nothing more than symbols! You have two of nothing, you have four of nothing, you have nothing! No matter how many symbols you invent, in the end you will have nothing!"

He begins panting, and tosses his trenchcoat to the floor to cool down. "If writing a number on the board could physically summon it, you would be making something from nothing! You'd violate Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?" He collapses.

The cause of death is determined to be a stroke.

I know emptyquoting is discouraged, but god drat that is funny.

Also well done explaining how wrong that guy is, sometimes the wall of verbiage can get a bit dense.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I feel insulted. Yudkowsky is not a computer scientist. He isn't a scientist of anything, he doesn't even have a degree.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Cardiovorax posted:

I feel insulted. Yudkowsky is not a computer scientist. He isn't a scientist of anything, he doesn't even have a degree.

He's more of a computer sophist, really.

Djeser
Mar 22, 2013


it's crow time again

Let's (Less?) Read: An Alien God

This one is about evolution. Yud starts out by talking about the apparent purpose/design in living things.

quote:

The Goddists said "God did it", because you get 50 bonus points each time you use the word "God" in a sentence.

I don't think that's why people argued for a divine creator, but I don't feel like reading three huge articles. Let's just stick to this one. Okay, people were dumb because of religion. Next?

quote:

Foxes seem well-designed to catch rabbits. Rabbits seem well-designed to evade foxes. Was the Creator having trouble making up Its mind? [...] The ecosystem would make much more sense if it wasn't designed by a unitary Who, but, rather, created by a horde of deities—say from the Hindu or Shinto religions. [...] I wonder if anyone ever remarked on the seemingly excellent evidence thus provided for Hinduism over Christianity. Probably not.

"Has anyone in the thousands of years of theological debate brought up this fairly basic fact? I didn't think so :smug:"

He goes on to point out that X-Men is unrealistic, because real mutations don't have a 'purpose'. He quotes someone who says what he's saying with way fewer words:

quote:

"Many non-biologists," observed George Williams, "think that it is for their benefit that rattles grow on rattlesnake tails."

Again using many more words than necessary, he explains that evolution only works through a bunch of small mutations, and it only works when each mutation improves reproductive fitness. All right, this is evo bio 101, but I liked evo bio 101.

quote:

Why is Nature cruel? You, a human, can look at an Ichneumon wasp, and decide that it's cruel to eat your prey alive. You can decide that if you're going to eat your prey alive, you can at least have the decency to stop it from hurting. It would scarcely cost the wasp anything to anesthetize its prey as well as paralyze it.

He also cites that elephants have a finite number of teeth, so old elephants die of starvation after they lose their last set of teeth.

quote:

If you were talking to a fellow human, trying to resolve a conflict of interest, you would be in a good negotiating position—would have an easy job of persuasion. It would cost so little to anesthetize the prey, to let the elephant die without agony! Oh please, won't you do it, kindly... um...

There's no one to argue with.

He then rails against 'amateur evolutionary biologists' who are 'making predictions' about how humans could design animals more ideally, and his example for this seems to be the old idea that a modern human designing taste buds would make whatever nutrients our bodies need most be the tastiest. Basically, he uses more words to talk about how evolution doesn't have an end goal.

quote:

Find a watch in a desert, said William Paley, and you can infer the watchmaker. There were once those who denied this, who thought that life "just happened" without need of an optimization process, mice being spontaneously generated from straw and dirty shirts.

If we ask who was more correct—the theologians who argued for a Creator-God, or the intellectually unfulfilled atheists who argued that mice spontaneously generated—then the theologians must be declared the victors: evolution is not God, but it is closer to God than it is to pure random entropy.

This is the weirdest argument I've seen in a while. "People who believed in intelligent design are less wrong (:haw:) than people who believed in spontaneous generation." He says that this is because evolution, as a process guided by natural selection, is non-random, so that's closer to a god than to randomness.

quote:

In a lot of ways, evolution is like unto theology. "Gods are ontologically distinct from creatures," said Damien Broderick, "or they're not worth the paper they're written on." And indeed, the Shaper of Life is not itself a creature. Evolution is bodiless, like the Judeo-Christian deity. Omnipresent in Nature, immanent in the fall of every leaf. Vast as a planet's surface. Billions of years old. Itself unmade, arising naturally from the structure of physics. Doesn't that all sound like something that might have been said about God?

Makes sense that theories are like gods in Yud's mind.

quote:

Evolution is not a God, but if it were, it wouldn't be Jehovah. It would be H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes.

Oh wait, Lovecraft? This should go in the TV Tropes thread then.

quote:

So much for the claim some religionists make, that they are waiting innocently curious for Science to discover God. Science has already discovered the sort-of-godlike maker of humans—but it wasn't what the religionists wanted to hear. They were waiting for the discovery of their God, the highly specific God they want to be there. They shall wait forever, for the great discovery has already taken place, and the winner is Azathoth.

In essence, what he's trying to say is that evolution is a non-random process that doesn't take human morality into account, and he's trying to draw parallels between that and concepts of gods (with the underlying message that an AI might act similarly to evolutionary processes). In getting his point across, though, he trips over his smug superiority and falls face-first into a dump truck full of useless words.

Verdict: Not technically wrong, but annoying anyway.

The Vosgian Beast
Aug 13, 2011

Business is slow
Less Wrongites accepting that "natural" does not equal "good" is one part of their ideology I actually agree with.

Or it was part of their ideology, until the Dark Enlightenment nerds created Gnon, which is their word for "divine law" except totes scientific and Less Wrong took it seriously because the idea that fascists who go on about Machiavellian politics and Nietzschean Will To Power could be disingenuous is beyond them.

The Dark Enlightenment ruins a lot.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



The dark enlightenment is that whole thing which is basically, "Well, I grew up around shitheads, and now I am well educated; I will use my cleverness, and concepts from that education, to justify the poo poo-headery of my birth, more or less point for point. Except maybe some tiny bits, like God or marijuana." Right? Or is it just the racism?

sat on my keys!
Oct 2, 2014

Nessus posted:

The dark enlightenment is that whole thing which is basically, "Well, I grew up around shitheads, and now I am well educated; I will use my cleverness, and concepts from that education, to justify the poo poo-headery of my birth, more or less point for point. Except maybe some tiny bits, like God or marijuana." Right? Or is it just the racism?

Dark Enlightenment is if you took every terrible misuse of statistics in The Bell Curve (HBD, "scientific" racism), mixed in some Red Pill and general manosphere bullshit (some DE people are against PUA stuff because someone might use it on their white women oh noes), and combined it with the general nerd tendency of seeing that the world is messed up, and deciding that it's because all those stupid humanities people or - gasp - people who didn't even go to college!!! - are allowed to have a voice in how their country is run.
