|
So the Slatestarcodex guy responded to something I wrote on tumblr with this http://slatestarscratchpad.tumblr.com/post/99067848611/su3su2u1-based-on-their-output-over-the-last quote:Possibly you're more active in the fanfic-reading community than in the Löb's Theorem-circumventing community? Anyone want to weigh in on a response? In particular, SolTerrasa, how do you feel about the "suspicious number" of former MIRI employees "leading AI projects" at Google?
|
# ? Oct 3, 2014 21:11 |
|
|
I'm just kind of annoyed LW has to pretend Peter Thiel isn't a terrible, terrible, terrible human being because he gives them money.
|
# ? Oct 3, 2014 21:39 |
|
As an aside, "thought leaders" is the most disgusting term to come out of the last decade.
|
# ? Oct 3, 2014 21:57 |
|
su3su2u1 posted:So the Slatestarcodex guy responded to something I wrote on tumblr with this I definitely do, but I'm kinda busy working on AI projects at Google at the moment. Maybe tonight?
|
# ? Oct 3, 2014 22:03 |
|
How many of these supposed "thought leaders" have published any academic work in CS/pure math/C&O, let alone AI? I really don't get how Stephen Hawking's opinion on "Not-QFT-or-GR subject X" is supposed to be more valuable to me than any other person's with a PhD? Also the fact that MIRI people almost exclusively publish in vanity journals is kind of suggestive in a bad way.
|
# ? Oct 4, 2014 00:05 |
|
bartlebyshop posted:How many of these supposed "thought leaders" have published any academic work in CS/pure math/C&O, let alone AI? I really don't get how Stephen Hawking's opinion on "Not-QFT-or-GR subject X" is supposed to be more valuable to me than any other person's with a PhD? Let's go a step towards what they try to do. How many have proper philosophical chops?
|
# ? Oct 4, 2014 00:22 |
|
Rigged Death Trap posted:Let's go a step towards what they try to do. Being a philosophically ignorant rube myself, I felt this claim would be harder for me to evaluate. Crazy bullshit gets published in PRB all the time, and my assumption is similar things can happen in philosophy. But I trust my CS/physics bullshit sensors enough to be able to say that this list of "thought leaders" is about as compelling as if someone said "well, CLRS think my institute devoted to showing that high Tc cuprates are actually s-wave superconductors is awesomesauce, give me cashmoney" (to use an example relevant to me).
|
# ? Oct 4, 2014 00:28 |
|
SolTerrasa posted:I definitely do, but I'm kinda busy working on AI projects at Google at the moment. Maybe tonight? I'll hold off replying until you have a chance to weigh in.
|
# ? Oct 4, 2014 00:53 |
|
Way to disappear Ray Kurzweil, Slate Star Codex dude.
|
# ? Oct 4, 2014 00:58 |
|
Rigged Death Trap posted:Let's go a step towards what they try to do. I think it's safe to assume that to them philosophy as an academic field is nothing but a bunch of people with varying degrees of intelligence churning out whatever inane bullshit they can to scam money from their followers and inflate their own egos.
|
# ? Oct 4, 2014 01:05 |
|
su3su2u1 posted:I'll hold off replying until you have a chance to weigh in.

This post has been edited substantially from the original version; it's really hard to talk about Google, which is part of the problem. Thanks! It might not be as enlightening as you hope, but here goes. I'm going to start by talking about the Google bits. Standard disclaimers: I speak for myself, not Google, I'm just some dude who works there and I'm not PR-approved.

For this to make sense you have to understand a couple of things about Google. Number one is that we don't get orders from on high very often. We might get a goal, like "increase the quality of search results by such-and-such an amount" from on high, and then that will filter down through the levels and be increasingly refined by people who are closer and closer to the problem. I am seven levels removed from Larry, but only three levels worth of goal-refinement. The goals for people at the very leaf nodes of the org tree are still often freakishly abstract, and you can use pretty much any kind of solution you want.

I've got a coworker who is really infatuated with ordering theory. It turns out that most of our problems can be solved with partial orders. You have to wrap them in a hundred other things before they fix all your problems, but they're the core math, the core solution. I personally really like functional transformations, and so I solve most problems by writing essentially pure mapping functions and running them in massive parallel over bucketloads of data. Last quarter, I decided I really liked AI and ML, so I solved a bunch of problems with AI and ML.

That's what I mean when I say I'm working on AI projects at Google; it's not like I'm trying to build a sentient lifeform or whatever. So he might be telling the truth, but it's not likely that it means anything. So let's take "leading AI projects at Google" to mean "works at Google, likes AI".
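A minimal sketch of that "essentially pure mapping functions, run in massive parallel" pattern, in Python; the normalize() function and the sample records are invented for illustration, not anything Google-specific:

```python
from multiprocessing import Pool

def normalize(record):
    """A pure function: the output depends only on the input record,
    so records can be processed in any order, on any worker."""
    return record.strip().lower()

if __name__ == "__main__":
    records = ["  Foo ", "BAR", " baz  "]
    # Because normalize() has no side effects, distributing it over a
    # pool of workers gives the same answer as a sequential map.
    with Pool(processes=2) as pool:
        cleaned = pool.map(normalize, records)
    print(cleaned)  # ['foo', 'bar', 'baz']
```

Purity is what makes the parallelism trivial: no shared state between records means no coordination between workers.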
The latter we can take as a given for anyone who gave up years of their life to work at MIRI. The former is harder to talk about here. Basically everything at Google is confidential in one way or another, so I probably can't even confirm what people are working on, or what our AI projects are, or whatever. This is why the MIRI people think we have all these AI projects; because they would if they had our nigh-unlimited resources, and no one will ever deny it, so therefore these projects exist.

I guess I need something much more specific from him before I can actually make a real comment. I don't really understand what he's implying. Is he implying that Google is poaching Friendly AI researchers for our own FAI project? Is he implying that the people who work on FAI are mostly smart enough to work at Google? I have very different things to say about those two things. Needless to say, we're probably not poaching people for Friendly AI. If we were I think I'd have noticed. Though it is loving hilarious that he noticed "people are leaving MIRI for Google" and didn't connect it to "because MIRI is a poo poo organization to work for if you're brilliant, and Google isn't." Instead in his head it's because they've been poached to work on Google's own friendly AI.

Things I would like include names of former MIRI members who work at Google now (since I couldn't find a historical list) and a list of things they think Google is working on so I can laugh at it. I'm unlikely to be able to respond to it, though, so it won't help for your Tumblr war (please don't have a Tumblr war, and if you do, please don't involve me). Also, I want to point out that of the many people at Google I've interacted with on AI topics, basically none knew about Yudkowsky, most of the rest thought his ideas were hilarious, exactly one took him seriously, and a tiny fraction thought he was wrong but that we shouldn't mock him.
He hardly ever enters the discourse here, so that tells you something about how effective his "networking" has been. In stack ranking terms my vague guess would be that Yudkowsky ranks above "faith healers", but below "climate change deniers" in terms of Engineering-wide Googler belief (I don't know about marketing / sales / whatever), and below all those in terms of Eng-wide Googler awareness / engagement. But again, I don't speak for Google, just my own impressions of the culture here.

Okay, Google bit over. From here on I'm much less an authority, so comment away; I'll come back with opinions later.

SolTerrasa fucked around with this message at 03:20 on Oct 4, 2014 |
# ? Oct 4, 2014 01:28 |
|
e: put it all in the first post.
SolTerrasa fucked around with this message at 03:14 on Oct 4, 2014 |
# ? Oct 4, 2014 01:45 |
|
Rigged Death Trap posted:Let's go a step towards what they try to do.

The LW line is that philosophy is worthless and studying it is only useful for 'status' purposes. It was explicitly stated in the My Little Pony fanfiction I read.

The hyperrational pony AI posted:You're not failing because you're stupid: the material that society thinks you should learn is arbitrary, boring, and is taught primarily for status reasons.

You're not a slacker, you're GIFTED posted:You're moderately smart--no matter what your grades say. I've watched you read statistics papers for fun while procrastinating on your studying. I've watched you read all sorts of advanced papers from various science journals instead of your assigned readings. And you're right to do so; your philosophy classes really are a waste of time.

Djeser fucked around with this message at 03:11 on Oct 4, 2014 |
# ? Oct 4, 2014 03:00 |
|
Djeser posted:The LW line is that philosophy is worthless and studying it is only useful because of 'status' purposes. It was explicitly stated in the My Little Pony fanfiction I read. That's an overstatement; the MLP:FiM contingent of LW is substantial (alarmingly substantial), but nothing like a majority. And even of those people, not everyone believes that philosophy in the abstract is bullshit. Yudkowsky seems to do his usual thing, where he claims that an entire field is bullshit except for the one interpretation he happens to agree with. http://wiki.lesswrong.com/wiki/Reductionism_(sequence)
|
# ? Oct 4, 2014 03:23 |
|
SolTerrasa posted:I'm unlikely to be able to respond to it, though, so it won't help for your Tumblr war (please don't have a Tumblr war, and if you do, please don't involve me). I was just curious what he could possibly mean. Like, it seems like MIRI/SIAI has only had a handful of researchers ever, so how many could really have moved on to Google? I don't plan to have a tumblr war. I was just curious on your take. I also talked to another friend of mine who works at Google in Boston.
|
# ? Oct 4, 2014 03:57 |
|
su3su2u1 posted:I was just curious what he could possibly mean. Like, it seems like MIRI/SIAI has only had a handful of researchers ever, so how many could really have moved on to google? I also know 4-5 people who work at Google if you need more anecdata.
|
# ? Oct 4, 2014 04:16 |
|
SolTerrasa posted:That's an overstatement; the MLP:FiM contingent of LW is substantial (alarmingly substantial), but nothing like a majority. And even of those people, not everyone believes that philosophy in the abstract is bullshit. Plus Nick Bostrom, who spends a lot of his time citing Yudkowsky in his work and inviting him to contribute to books he's editing, is a professor of philosophy. Like, that's his actual job title. It would seem a bit churlish for Yud to say 'thanks for the cites, which I'll use to look like a proper academic, but your entire field is bullshit'.
|
# ? Oct 4, 2014 09:25 |
|
I think there's only a small subset of STEM uber alles folks who take that hard line. Everyone else just nods and leaves them be to gently caress their pony plushies, lest they attract their attention.
|
# ? Oct 4, 2014 10:01 |
|
From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers.
|
# ? Oct 4, 2014 18:29 |
|
Strategic Tea posted:From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers. Agreed. If that post's aim was to tout MIRI's accomplishments, it's pretty unimpressive. We're interested in AI and some of us manage to get AI related jobs! We're worried about a possibly-dark-future-we-came-up-with-and-totally-isn't-from-Terminator/The Matrix, and talked to famous people about it!
|
# ? Oct 4, 2014 19:16 |
|
Strategic Tea posted:From a purely outside perspective, an awful lot of Slatestarcodex's points seem like connected people talking to each other occasionally, setting up ambiguous gravy train groups, and getting reviews of their non-academic work in papers.

Pretty much. I mean, over here you have a book chapter that Bostrom and Yudkowsky wrote, which cites Yudkowsky a few times - except those citations point to a chapter by Yudkowsky (cited to look like two chapters, naughty naughty) in a book that Bostrom edited. Dig deeper - Yudkowsky also likes to cite the Journal of Evolution and Technology, an online academic journal operated by the Institute for Ethics and Emerging Technologies, an institute founded by... Nick Bostrom. (The journal was previously operated by the World Transhumanist Association.) Oh, and not only has Nick Bostrom been the Editor-in-Chief for JET in the past, so has Robin Hanson.

I mean, this kind of situation isn't exactly uncommon in academia - when your exact field has only three people studying it in the world, you tend to end up citing each other in circles - but it's still a bit iffy. If you look at the names on the MIRI staff page and look at the citation list for any given MIRI paper there's an awful lot of overlap; you could argue that this is a natural result of no one else studying the field, so they have to cite one another... or you could argue that they're off in a corner doing nothing worthwhile. The greater scientific community doesn't give a poo poo. Anyway:

quote:1. A whole bunch of very important thought leaders ... have publicly stated they think superintelligent AI is a major risk.

Disingenuous. For example, Bill Gates speaking out on global AI risk? This article here. The exact quote is "There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list." That's it.
Stephen Hawking actually did worry about angry AI in this opinion piece in the Independent. Elon Musk? He tweeted "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes". So we're back to Nick Bostrom again.

quote:2. Various published papers, conference presentations, and chapters in textbooks on both social implications of AI and mathematical problems relating to AI self-improvement and decision theory. Some of this work has been receiving positive attention in the wider mathematical logic community - see for example here

The link in 'here' goes to this blog post from March 2013. So for "some of this work has been receiving positive attention in the wider mathematical logic community" read "a single, unfinished paper from 18 months ago got a blog mention and then sank without trace". (It's been cited once, here, where the author spends a sentence pointing out that Yud & co's thesis is rubbish but the problem that sinks it doesn't apply to her thesis.)

quote:4. A suspicious number of MIRI members have gone on to work on/help lead various AI-related projects at Google.

A suspicious number of MIRI members have achieved absolutely nothing. Who's to say which half-assed generalisation is more telling?

quote:5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian, the Telegraph, the Economist, Salon, and the Financial Times.

So was the Da Vinci Code.

potatocubed fucked around with this message at 21:04 on Oct 4, 2014 |
# ? Oct 4, 2014 21:00 |
|
potatocubed posted:Stephen Hawking actually did worry about angry AI in this opinion piece in the Independent. I just read that. I am shocked that anyone would namedrop Hawking, like he matters to AI, when Stuart Russell was a coauthor on that piece. Russell knows what he's about, he literally wrote The Book on AI. I wonder how much of that he believes. I wonder if he's a MIRI supporter. The most recent version of the book does have a sentence about Yudkowsky in it. I would give this whole thing a serious re-think if I discovered that he believed in it; I have respect for him. E: here's Yud et al crowing about their mention. http://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/ Huh. I'm going to have to think about that. SolTerrasa fucked around with this message at 21:45 on Oct 4, 2014 |
# ? Oct 4, 2014 21:41 |
|
SolTerrasa posted:For this to make sense you have to understand a couple of things about Google. Number one is that we don't get orders from on high very often. We might get a goal, like "increase the quality of search results by such-and-such an amount" from on high, and then that will filter down through the levels and be increasingly refined by people who are closer and closer to the problem. I am seven levels removed from Larry, but only three levels worth of goal-refinement. The goals for people at the very leaf nodes of the org tree are still often freakishly abstract, and you can use pretty much any kind of solution you want. Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals.
|
# ? Oct 4, 2014 21:43 |
|
Ratoslov posted:Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals.

Oh, man, you are so right you don't even know. We like to say "there are two ways to do anything at Google, the way that's deprecated and the way that's not ready yet." Or "any code you could possibly want to write is already checked in, but it probably doesn't work anymore". Obviously Google does work, though, so there must be an explanation for that. I have some guesses. We have the advantage that our hiring selects for really, really adaptable people. We'll pick a good generalist over a great specialist every time. We have really good PMs, and really really good managers, so you usually hear if things will change out from under you. But yeah, it's a constant challenge to keep everything reasonably coherent given that there's a single monolithic repository (http://www.infoq.com/presentations/Development-at-Google) and everyone builds from the most recent changes. Our build infrastructure is a work of genius.

Oh, that's the other thing. It's ALL a work of genius. Every piece of infrastructure at Google was built by really smart people, for really smart people to use. This is great, and also terrible; it means that things that should be impossible are ridiculously easy. But it also means that you have to know everything about everything to use anything. You wouldn't expect to have to be very smart in order to, for instance, serialize some arbitrary struct to persistent storage. But you do. You get all the advantages of the geniuses who built protobuf (https://developers.google.com/protocol-buffers/) and bigtable (http://en.m.wikipedia.org/wiki/BigTable) and colossus/gfs (http://en.m.wikipedia.org/wiki/Google_File_System) and a couple other things that are still secret.
That struct will be so goddamn well-serialized that it will be recoverable a thousand years hence, when we've long since moved on from using bits and bytes, after a nuclear attack, asteroid strike, and xenonauts-style alien invasion simultaneously. But you have to hold the whole system architecture (and all the changes and upcoming changes) in your head the whole time you're working. It's hard.
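Stripped of all the Google infrastructure, "serialize a struct to persistent storage" is a small problem; a generic stand-in using only Python's stdlib struct module (this is not protobuf, and the field layout is invented for illustration):

```python
import struct

# A fixed wire format: little-endian, 32-bit signed id, 64-bit float score.
LAYOUT = "<id"

def serialize(record_id, score):
    """Pack a (record_id, score) record into 12 bytes suitable for storage."""
    return struct.pack(LAYOUT, record_id, score)

def deserialize(blob):
    """Recover the (record_id, score) tuple from its byte representation."""
    return struct.unpack(LAYOUT, blob)

blob = serialize(7, 3.5)
assert deserialize(blob) == (7, 3.5)
```

Systems like protobuf exist to solve the hard parts this sketch ignores: schema evolution, optional fields, and forward/backward compatibility between readers and writers.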
|
# ? Oct 4, 2014 22:19 |
|
Ratoslov posted:Wow. That's a pretty fascinating org structure, and clearly it works, but how the heck do you keep from stepping on each other's toes? It sounds like there'd be a constant problem of people breaking each other's solutions by pursuing their own freakishly abstract goals. Have you seen Valve's structure?
|
# ? Oct 4, 2014 22:27 |
|
SolTerrasa posted:But you have to hold the whole system architecture (and all the changes and upcoming changes) in your head the whole time you're working. It's hard. Ugh. Honestly sounds like a nightmare to me, but I can see how it works. Thank you. Tunicate posted:have you seen Valve's structure. I've read their 'employee handbook', yeah. I'm not entirely sure how they manage the task of getting the rent paid for their office-space, let alone conquer the digital game-distribution world with that kind of org structure.
|
# ? Oct 5, 2014 02:11 |
Ratoslov posted:Ugh. Honestly sounds like a nightmare to me, but I can see how it works. Thank you. 1. People will use a DRM system if it is fun and has regular sales (and they get to collect a cut) 2. Hats
|
|
# ? Oct 5, 2014 02:19 |
|
SolTerrasa posted:I just read that. I am shocked that anyone would namedrop Hawking, like he matters to AI, when Stuart Russell was a coauthor on that piece. Russell knows what he's about, he literally wrote The Book on AI. I wonder how much of that he believes. I wonder if he's a MIRI supporter. The most recent version of the book does have a sentence about Yudkowsky in it. It's like when Szilárd and Einstein proposed the atomic bomb. It was Szilárd's idea, but he asked Einstein to sign his name on it because he was the most famous physicist in the world, just like how Hawking is the most famous physicist now.
|
# ? Oct 5, 2014 04:49 |
|
I'm speaking here as a philosophy guy, so if this is an incorrect way of thinking about the problem, please would someone more knowledgeable about AI and/or math correct me. How does Yudkowsky et al's timeless decision theory work out against Bremermann's Limit? (For those who don't know: this is the maximum rate of computation physics permits per unit of mass; applied to a theoretical supercomputer with the entire mass of the Earth, grinding away for the planet's entire history, it caps the total computation at about 2.56 × 10^92 bit operations.) Surely an AI running billions and billions and billions of perfect simultaneous simulations of every possible probability for every possible person would require more than 2.56 × 10^92 bits of processing. Not to mention what such a thing would even use for fuel or as heat sinks... Even a hypothetical self-improving AI can't break the laws of physics. I mean, TDT is already basically just a crappy recitation of Pascal's wager, but I'm curious if the more practical issues with it have been discussed.
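The arithmetic behind that figure is easy to check, assuming Bremermann's limit of ~1.36 × 10^50 bit operations per second per kilogram and the round numbers he used (an Earth of 6 × 10^24 kg computing for 10^10 years):

```python
# Bremermann's limit: the maximum computation rate permitted by
# quantum mechanics and relativity, per unit of mass.
BREMERMANN = 1.36e50        # bit operations per second per kilogram
EARTH_MASS = 6.0e24         # kilograms (Bremermann's round figure)
DURATION = 1.0e10 * 3.15e7  # ten billion years, in seconds

total_ops = BREMERMANN * EARTH_MASS * DURATION
print(f"{total_ops:.2e}")  # 2.57e+92
```

So the quoted 2.56 × 10^92 bound is the right order of magnitude for an Earth-mass computer running for the planet's whole history.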
|
# ? Oct 5, 2014 07:02 |
|
Well you see it might be possible for physics to be wrong and that's what Yud is betting on.
|
# ? Oct 5, 2014 07:03 |
|
Spoilers Below posted:I'm speaking here as a philosophy guy, so if this is an incorrect way of thinking about the problem, please would someone more knowledgeable about AI and/or math correct me. I'm not an expert on these things either, but I think your general point is a good one. It seems that the dumb Pascal's Wager type scenarios involving TDT that people have come up with all involve effectively infinite simulations, while the thought experiments where TDT actually makes sense (like Newcomb's Problem) don't involve that kind of scale.
|
# ? Oct 5, 2014 07:40 |
|
Tunicate posted:Well you see it might be possible for physics to be wrong and that's what Yud is betting on.

I went back to the article where Yud says this, and found a new type of crazy in the first comment:

Johnicolas posted:You reference a popular idea, something like "The integers are countable, but the real number line is uncountable." I apologize for nitpicking, but I want to argue against philosophers (that's you, Eliezer) blindly repeating this claim, as if it was obvious or uncontroversial.

Uh, it is, at least to the extent that anything involving infinities can be said to be "obvious". For all Yudkowsky's blind appropriation of misunderstood concepts from mathematics, "R is uncountable" is true.

Johnicolas posted:Yes, it is strictly correct according to current definitions. However, there was a time when people were striving to find the "correct" definition of the real number line. What people ended up with was not the only possibility, and Dedekind cuts (or various other things) are a pretty ugly, arbitrary construction.

Dedekind cuts are rather ugly and unintuitive if you haven't seen them before. But they're hardly the only way to construct the reals from the rationals. For example, Cauchy completeness is a very natural one, is not at all ugly, is much more commonly used, generalizes easily to other spaces, and gives the same result. When you get several completely different constructions all producing the same result, the result isn't arbitrary.

Johnicolas posted:The set containing EVERY number that you might, even in principle, name or pick out with a definition is countable (because the set of names, or definitions, is a subset of the set of strings, which is countable).

In a follow-up comment, he explains further that we need to stop using the real numbers and start using the "nameable numbers" instead.
It's not clear how you would rigorously define such a set or why it would be desirable to do so; this is like a set-theoretic version of Yud's "0 and 1 aren't probabilities" thing. In fact, with even a small amount of mathematical background, it is quite obvious that such a set cannot be well-defined. If S is the set of "nameable numbers" and isn't all of R, then either Cantor's Diagonal Argument or the Well-Ordering Theorem can be used to deterministically name a number not in S.

Johnicolas posted:The Lowenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).

Here's a handy tip for armchair well-I-know-I'm-quite-clever people: if you ever find yourself "loosely interpreting" a mathematical theorem, stop. Anything you write based on your loose interpretation will almost surely be as accurate as a portrayal of quantum physics in a sci-fi b-movie. For one thing, Lowenheim-Skolem only applies in first-order logic, and its statement is provably false if extended to other systems. We are perfectly capable of using languages outside of first-order logic - hell, you can't even talk about the notions of "infinite" or "countable" in first-order logic - so restrictions on what can be said in first-order logic don't actually restrict what we ourselves can say. For another thing, Lowenheim-Skolem does not say that "there's a perfectly self-consistent interpretation of your talk that refers only to finite things". What it says is that every theory with an infinite model has a model of every infinite cardinality - that is, for any set of rules you write down that hold true in some set of infinite size, there is a set of every infinite size in which they hold true too. Just about the only thing it doesn't say is that there's a finite model. Did you even read the theorem?
On a more technical note, the proof of Lowenheim-Skolem requires the axiom of choice, which states that given any (possibly infinite) collection of nonempty sets, it is possible to choose one element from each set. It seems obviously true and is almost universally accepted in mathematics, but it has a couple of unusual consequences (the well-ordering theorem and the Banach-Tarski theorem) that feel odd and raise eyebrows. Johnicolas's line of argument suggests that he disagrees with the axiom of choice - after all, if you can only write down finitely many strings, you can't make uncountably many distinct choices as required. But if he rejects the axiom of choice, he can't use Lowenheim-Skolem. He later clarifies what he means:

Johnicolas posted:Mathematicians routinely use "infinite" to mean "infinite in magnitude". For example, the concept "The natural numbers" is infinite in magnitude, but I have picked it out using only 19 ascii characters. From a computer science perspective, it is a finite concept - finite in information content, the number of bits necessary to point it out.

First of all, what the hell does this have to do with Lowenheim-Skolem? You're not talking about the size of the model anymore, you're talking about the size of the theory. "Anything that can be expressed as a finite string can be expressed as a finite string" isn't a theorem or an insight, it's a tautology. Second of all, the real numbers are "finite" under this definition anyhow, so how does any of this connect in any way to your original point?

Johnicolas posted:You don't get magical powers of infinity just from claiming to have them. Standard mathematical talk is REALLY WEIRD from a computer science perspective.

A mathematician picks up a piece of chalk. Upon the blackboard he writes: "2+2=4". Suddenly, the door bursts open and a computer scientist charges in. "LIES!" he shouts. "You write about twos and fours, but you don't have two of anything!
You don't have four of anything!" He pulls out a monogrammed handkerchief and wipes the sweat from both his chins. "You do not magically conjure matter merely by writing letters on a page! They are nothing more than symbols! You have two of nothing, you have four of nothing, you have nothing! No matter how many symbols you invent, in the end you will have nothing!" He begins panting, and tosses his trenchcoat to the floor to cool down. "If writing a number on the board could physically summon it, you would be making something from nothing! You'd violate Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?" He collapses. The cause of death is determined to be a stroke.

Johnicolas posted:I have a knee-jerk response, railing against uncountable sets in general and the real numbers in particular; it's not pretty and I know how to control it better now.

Lottery of Babylon fucked around with this message at 17:49 on Oct 5, 2014 |
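Cantor's diagonal argument is constructive enough to run; a toy version over finitely many rows (the enumeration of binary sequences here is invented for illustration, and the same trick applied to any claimed enumeration of "nameable numbers" names a number not on the list):

```python
def diagonalize(rows):
    """Build a binary string that differs from rows[i] at position i,
    so it cannot equal any row in the enumeration."""
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

rows = ["0110", "1010", "0011", "1111"]
d = diagonalize(rows)
assert all(d[i] != rows[i][i] for i in range(len(rows)))  # flipped on the diagonal
assert d not in rows  # so it was never in the list
```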
# ? Oct 5, 2014 08:31 |
|
Honestly, the whole deal with "Friendly AI" and MIRI's work on it just comes across to me like a bunch of iron-age blacksmiths desperately trying to figure out what safety measures you'd need in a modern steel foundry, having no idea how such a thing would function, or even how to make steel. Empty guesses and wasted effort, rationalized only by the perceived danger.
|
# ? Oct 5, 2014 08:42 |
|
Lottery of Babylon posted:A mathematician picks up a piece of chalk. Upon the blackboard he writes: "2+2=4". I know emptyquoting is discouraged, but god drat that is funny. Also well done explaining how wrong that guy is, sometimes the wall of verbiage can get a bit dense.
|
# ? Oct 5, 2014 11:33 |
|
I feel insulted. Yudkowsky is not a computer scientist. He isn't a scientist of anything, he doesn't even have a degree.
|
# ? Oct 5, 2014 14:02 |
|
Cardiovorax posted:I feel insulted. Yudkowsky is not a computer scientist. He isn't a scientist of anything, he doesn't even have a degree. He's more of a computer sophist, really.
|
# ? Oct 5, 2014 18:24 |
|
Let's (Less?) Read: An Alien God

This one is about evolution. Yud starts out by talking about the apparent purpose/design in living things.

quote:The Goddists said "God did it", because you get 50 bonus points each time you use the word "God" in a sentence.

I don't think that's why people argued for a divine creator, but I don't feel like reading three huge articles. Let's just stick to this one. Okay, people were dumb because of religion. Next?

quote:Foxes seem well-designed to catch rabbits. Rabbits seem well-designed to evade foxes. Was the Creator having trouble making up Its mind? [...] The ecosystem would make much more sense if it wasn't designed by a unitary Who, but, rather, created by a horde of deities—say from the Hindu or Shinto religions. [...] I wonder if anyone ever remarked on the seemingly excellent evidence thus provided for Hinduism over Christianity. Probably not.

"Has anyone in the thousands of years of theological debate brought up this fairly basic fact? I didn't think so." He goes on to point out that X-Men is unrealistic, because real mutations don't have a 'purpose'. He quotes someone who says what he's saying with way fewer words:

quote:"Many non-biologists," observed George Williams, "think that it is for their benefit that rattles grow on rattlesnake tails."

Again using many more words than necessary, he explains that evolution only works through a bunch of small mutations, and it only works when each mutation improves reproductive fitness. All right, this is evo bio 101, but I liked evo bio 101.

quote:Why is Nature cruel? You, a human, can look at an Ichneumon wasp, and decide that it's cruel to eat your prey alive. You can decide that if you're going to eat your prey alive, you can at least have the decency to stop it from hurting. It would scarcely cost the wasp anything to anesthetize its prey as well as paralyze it.
He also cites that elephants have a finite number of teeth, so old elephants die of starvation after they lose their last set of teeth. quote:If you were talking to a fellow human, trying to resolve a conflict of interest, you would be in a good negotiating position—would have an easy job of persuasion. It would cost so little to anesthetize the prey, to let the elephant die without agony! Oh please, won't you do it, kindly... um... He then rails against 'amateur evolutionary biologists' who are 'making predictions' about how humans could design animals more ideally, and his example for this seems to be the old idea that a modern human designing taste buds would make whatever nutrients our bodies need most be the tastiest. Basically, he uses more words to talk about how evolution doesn't have an end goal. quote:Find a watch in a desert, said William Paley, and you can infer the watchmaker. There were once those who denied this, who thought that life "just happened" without need of an optimization process, mice being spontaneously generated from straw and dirty shirts. This is the weirdest argument I've seen in a while. "People who believed in intelligent design are less wrong () than people who believed in spontaneous generation." He says that this is because evolution, as a process guided by natural selection, is non-random, so that's closer to a god than to randomness. quote:In a lot of ways, evolution is like unto theology. "Gods are ontologically distinct from creatures," said Damien Broderick, "or they're not worth the paper they're written on." And indeed, the Shaper of Life is not itself a creature. Evolution is bodiless, like the Judeo-Christian deity. Omnipresent in Nature, immanent in the fall of every leaf. Vast as a planet's surface. Billions of years old. Itself unmade, arising naturally from the structure of physics. Doesn't that all sound like something that might have been said about God? Makes sense that theories are like gods in Yud's mind. 
quote:Evolution is not a God, but if it were, it wouldn't be Jehovah. It would be H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes. Oh wait, Lovecraft? This should go in the TV Tropes thread then. quote:So much for the claim some religionists make, that they are waiting innocently curious for Science to discover God. Science has already discovered the sort-of-godlike maker of humans—but it wasn't what the religionists wanted to hear. They were waiting for the discovery of their God, the highly specific God they want to be there. They shall wait forever, for the great discovery has already taken place, and the winner is Azathoth. In essence, what he's trying to say is that evolution is a non-random process that doesn't take human morality into account, and he's trying to draw parallels between that and concepts of gods (with the underlying message that an AI might act similarly to evolutionary processes). Getting his point across though, he trips over his smug superiority and falls face-first into a dump truck full of useless words. Verdict: Not technically wrong, but annoying anyway.
|
# ? Oct 5, 2014 19:06 |
|
One part of Less Wrong ideology I actually agree with is accepting that "natural" does not equal "good". Or it was part of their ideology, until the Dark Enlightenment nerds created Gnon, which is their word for "divine law" except totes scientific, and Less Wrong took it seriously, because the idea that fascists who go on about Machiavellian politics and Nietzschean Will To Power could be disingenuous is beyond them. The Dark Enlightenment ruins a lot.
|
# ? Oct 5, 2014 21:11 |
The dark enlightenment is that whole thing which is basically, "Well, I grew up around shitheads, and now I am well educated; I will use my cleverness, and concepts from that education, to justify the poo poo-headery of my birth, more or less point for point. Except maybe some tiny bits, like God or marijuana." Right? Or is it just the racism?
|
# ? Oct 5, 2014 21:37 |
|
Nessus posted:The dark enlightenment is that whole thing which is basically, "Well, I grew up around shitheads, and now I am well educated; I will use my cleverness, and concepts from that education, to justify the poo poo-headery of my birth, more or less point for point. Except maybe some tiny bits, like God or marijuana." Right? Or is it just the racism? Dark Enlightenment is if you took every terrible misuse of statistics in The Bell Curve (HBD, "scientific" racism), mixed in some Red Pill and general manosphere bullshit (some DE people are against PUA stuff because someone might use it on their white women oh noes), and combined it with the general nerd tendency of seeing that the world is messed up, and deciding that it's because all those stupid humanities people or - gasp - people who didn't even go to college!!! - are allowed to have a voice in how their country is run.
|
# ? Oct 5, 2014 21:44 |