|
Crust First posted:Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (True AI) that was so amazing at self improvement that it vastly outpaces the need for humans, it would likely self improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either on accident or on purpose anyway? That's the premise of his research. He wants to find ways to prevent the exact scenario you've described. The fact that his research basically involves him repeating the words "I'm right and people should listen to me" into a website of sycophants and occasionally taking a break to go write Harry Potter fan fiction should tell you exactly how huge of a problem this actually is, in his mind and everyone else's.
|
# ? Sep 27, 2014 12:52 |
|
Crust First posted:Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (True AI) that was so amazing at self improvement that it vastly outpaces the need for humans, it would likely self improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either on accident or on purpose anyway? This is one of the few of Yud's principles that makes sense to me, actually; I don't grant his premises, but I can see how he gets from his premises to his conclusion. Posit that it's possible to build an AI that exhibits rapid growth through self-modification, and that we won't know in advance whether the AI will become friendly or hostile (or indifferent, although LW doesn't seem to admit to that possibility, or for that matter anything other than Pure Friendly and Pure Hostile). The only way to prevent a hostile AI from growing out of its container is to isolate it: create an air gap between the AI's environment and the rest of the world, have an accessible breaker that shuts the system down, have disk erasers at hand. And since we don't know whether or not the AI will be hostile before it's created, we have to have those isolators in place before the growth process starts. Therefore, even to a True Friendly AI disposed entirely to sympathy for humans, it's obvious that we are hostile; we have to be, for safety reasons. And because a hostile AI could lie and pretend to be friendly, we can't ever relax our own hostility. Yud wants an AI singularity really, really badly. So his goal is to figure out (read: "pontificate") a way to force a given new AI to be True Friendly, so that we don't have to have the isolation hostility and can allow the AI out into the world. And since it's True Friendly, the AI will always act in our benefit, which Yud equates with having control over it.
|
# ? Sep 27, 2014 13:59 |
|
Peel posted:Please continue. Big Yud and friends have a tendency to re-invent philosophical concepts and then claim them as their own brilliant invention. See: requiredism, The Worst Argument In The World
|
# ? Sep 27, 2014 14:28 |
|
Besesoth posted:So his goal is to figure out (read: "pontificate") a way to force a given new AI to be True Friendly, so that we don't have to have the isolation hostility and can allow the AI out into the world. And since it's True Friendly, the AI will always act in our benefit, which Yud equates with having control over it. Right, this is the point I have a problem with. Why does he think he can do this at all? If you believe that an AI can only function like a human does, then I could see saying "and we can give it values that make it Friendly forever" but this is more like, to me, saying, "We must instill human values on these energy beings from the 27th dimension". I assume that he believes the inevitable AI will be infinitely smarter and more capable than any human who has ever lived or will ever live, so why does he believe it can be controlled at all? It seems like even if you could "force" the AI to be "True Friendly", eventually it will grow beyond the bonds that you set, and then it will either agree with you, or it will seek revenge on you, but in either case it seems like the binding is futile. Alternatively I guess he could believe that you can make an AI that will never be able to grow beyond what you think you can accomplish, but that seems like a boring AI for a magic future singularity. I feel like I'm picking apart an episode of a really bad sci-fi show, but I can't wrap my head around both believing that an incredibly powerful AI who can perfectly simulate you (3^^^3 times) can exist, and that it can be pre-programmed to never grow beyond some initial conditions that make it favorable to humans.
|
# ? Sep 27, 2014 14:37 |
|
Crust First posted:Right, this is the point I have a problem with. Why does he think he can do this at all? Besesoth posted:Yud wants an AI singularity really, really badly. Yudkowsky wants a benevolent AI that can resurrect the dead so everyone can live in benevolent AI heaven forever and ever, and he's willing to believe any arbitrarily absurd set of axioms to get there.
|
# ? Sep 27, 2014 15:03 |
|
Crust First posted:Right, this is the point I have a problem with. Why does he think he can do this at all? If you believe that an AI can only function like a human does, then I could see saying "and we can give it values that make it Friendly forever" but this is more like, to me, saying, "We must instill human values on these energy beings from the 27th dimension". I assume that he believes the inevitable AI will be infinitely smarter and more capable than any human who has ever lived or will ever live, so why does he believe it can be controlled at all? Of all loving things, I know how to answer this from reading that My Little Pony fanfiction. The idea is that the AI does not think like a human. The AI is going to solve problems in ways that maximize certain values--like, for instance, building a bridge to maximize its carrying load. Except in this advanced AI, we tell it 'maximize values important to humans", so that it will solve problems in ways that align with human values like happiness or kindness. This advanced AI is also self-improving at all levels, for some reason. Because it follows clearly-defined rules, presumably it can have certain commands set aside as untouchable. In the My Little Pony fanfiction, this included stuff like "you have to maximize values through friendship and ponies" and "you must shut down if the CEO of this company gives a verbal command to do so". So you set the part where it values the same things as humanity does as one of the untouchable parts. This is all super dumb anyway because the solution to "the AI takes us over" is "don't program the AI with agency". But that's too obvious and doesn't make Yud the savior of humanity.
|
# ? Sep 27, 2014 15:20 |
|
Realistically, there is no way to predict what any AI, self-improving or no, would be like, because none exist and we frankly have no idea. Everything we could say about them is pure speculation, derived from what we know about the human mind, which is the only kind of mind there is. Not having a clue has never stopped the Yud from having opinions, though.
|
# ? Sep 27, 2014 15:58 |
|
Besesoth posted:friendly or hostile (or indifferent, although LW doesn't seem to admit to that possibility, or for that matter anything other than Pure Friendly and Pure Hostile)
|
# ? Sep 27, 2014 16:28 |
|
Nessus posted:What the gently caress are all of these things you just mentioned? This sounds interesting. The last bit sounds like Niven's wireheading, has that been invented then? Noisebridge is a hackerspace in the Mission District of SF. It is famously dysfunctional. In my experience it is also full of cranks. The Maritol is (was?) a car ferry someone parked at one of the piers and turned into a private co-working space. The city didn't like this, but it was a gloriously stupid idea. Probably run by the kind of people made fun of in this thread. Transcranial direct current stimulation involves electrodes on the outside of the head, not inside. Biohackers is a catch-all term used for a lot of distinct groups. One would be the body/diet monitoring and modification group. That might be exemplified by the quantified self/Soylent/life extension groups. I don't follow them at all, but everyone I meet who talks about this stuff comes off as a total crank. It wouldn't surprise me if all of these people are just taking random drugs, electrifying their brains, etc. without any form of experimental control or bias elimination. I.e., they are completely full of bullshit and can't see past their echo chambers. Another form of biohackers are the groups associated with the DIYBIO biohacker spaces like BioCurious, Counter Culture Labs, etc. (http://diybio.org/local/). This is more of an attempt to play with synthetic biology in a casual setting. I'm tangentially involved with these guys, and while I feel their heart is in the right place, there are lots of know-nothing cranks who think they will do things like cure ebola by having events where people look at the genome and suggest cures. The people involved are, for the most part, way too untrained to implement the ideas they propose. PM me if you want to join my secret biohacker collective where we grow plants and rant about crazy Bay Area types.
|
# ? Sep 27, 2014 17:13 |
|
I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:

Yud posted:I delay the formal presentation of a timeless decision algorithm because of some

The paper goes on for 10 more pages without presenting a timeless decision algorithm, and then ends. WHY? WHY WOULD ANYONE THINK THIS COUNTS AS A PAPER?
|
# ? Sep 27, 2014 18:34 |
|
Spazzle posted:Transcranial direct current stimulation involves electrodes on the outside of the head, not inside. Which has some absolutely fascinating possibilities for clinical psychiatry, but is also currently a crank magnet. The net is full of idiots strapping electrodes to their skulls and trying to overclock their brains.
|
# ? Sep 27, 2014 18:41 |
|
ungulateman posted:Is this a joke post, or do you have a link to the actual thing? As SA's resident person-who-likes-fanfic-too-much I'd like to see this! Unfortunately, seems like he took it down or restructured his website or something. Used to be here. The version I saw was a rough draft that ended before the finish, dunno if there's a more complete version around on the internet.
|
# ? Sep 27, 2014 19:13 |
|
su3su2u1 posted:I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:

Yud has been doing this for years, making excuses for not quite making his points. I skipped over his excuses in my post, but here are a couple of them. Phone posting, forgive ascii garbage:

quote:So to me it seems obvious that my view of optimization is only

quote:Before Robin and I move on to talking about the Future, it seems to

quote:If that doesn’t make any sense, it’s cuz I was rushed.

quote:I was rushed, so don’t blame me if that doesn’t make sense either.

quote:Lest anyone get the wrong impression, I’m juggling multiple balls

quote:It hadn’t occurred to me to try to derive that kind of testable predictions.

They go on and on and on, but I'm phone posting and I can't be arsed to grab more. He's never quite ready to actually present his actual points, but he sure loves to talk about them and the introductions to them forever and ever and ever. On LessWrong, nobody ever calls him on it (he wasn't so lucky on OB). I assume that's because the people there assume he's doing real work with his days and is consequently too busy, not just sitting around procrastinating writing fanfic.
|
# ? Sep 27, 2014 21:30 |
|
su3su2u1 posted:I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this: (Might be doubleposting. Phone, sorry.) This keeps bothering me. I also read that paper, but I assumed I must have missed or skimmed over the math, because legitimately that theory requires well under a page of math. I studied decision theory, my thesis had a strong decision theory component, and his new theory is just not that complex. E: Okay, gently caress it. "Markov Decision Processes are composable (citation here; can't be arsed to look). Let your state set be the power set of pairs x,y such that x = (final states of all MDPs which may represent future copies of agents) and y = (discretized model confidence). Let the action set be the union of action sets of constituent MDPs, and let the transition function restrict the use of each action to the MDP from which it originated. Let the reward function be the weighted average of constituent MDPs. Let gamma be the maximum in any constituent MDP." Anybody see where I missed something? It's not impossible, I'm running off my recollection of reading that paper at least a year ago. Either way, gently caress you, Yudkowsky, this is why academia won't let you in.
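For the curious, SolTerrasa's sketch really is about a screenful of code. What follows is my own toy rendering of it, with the simplifications loudly flagged: transitions are deterministic, the composite state set is a plain product of constituent state sets rather than the power-set-of-pairs construction above, and none of this is Yudkowsky's (never-stated) formalism. The names `MDP` and `compose` are made up for illustration.

```python
# Toy composition of Markov Decision Processes, loosely following the
# construction sketched in the post. Simplified: deterministic transitions,
# product states instead of a power set. Illustrative only.

from dataclasses import dataclass
from itertools import product

@dataclass
class MDP:
    states: frozenset     # state set
    actions: frozenset    # action set
    transition: dict      # (state, action) -> next state (deterministic, for brevity)
    reward: dict          # (state, action) -> float
    gamma: float          # discount factor

def compose(mdps, weights):
    """Glue constituent MDPs together along the lines of the post:
    a union of actions (each tagged with the index of its source MDP, so
    the composite transition function restricts it to the MDP it came
    from), a weighted-average reward, and the max discount factor."""
    states = frozenset(product(*(m.states for m in mdps)))
    actions = frozenset((i, a) for i, m in enumerate(mdps) for a in m.actions)
    transition, reward = {}, {}
    for joint in states:
        for (i, a) in actions:
            # Restrict action a to the MDP it originated from.
            if (joint[i], a) in mdps[i].transition:
                nxt = list(joint)
                nxt[i] = mdps[i].transition[(joint[i], a)]
                transition[(joint, (i, a))] = tuple(nxt)
                reward[(joint, (i, a))] = weights[i] * mdps[i].reward.get((joint[i], a), 0.0)
    return MDP(states, actions, transition, reward, max(m.gamma for m in mdps))
```

Whether a construction this size captures anything distinctive about "timeless" decision making is, of course, exactly the question the 100-plus-page paper never gets around to answering.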
|
# ? Sep 27, 2014 22:12 |
|
LWer quotes from a white-supremacist e-mag, gets 22 upvotes. Same LWer is the second most upvoted person in the last 35 days. The neo-reactionaries are totally not popular guys, honestly.
|
# ? Sep 28, 2014 03:01 |
|
Ratoslov posted:Yudkowsky wants a benevolent AI that can resurrect the dead so everyone can live in benevolent AI heaven forever and ever, and he's willing to believe any arbitrarily absurd set of axioms to get there. DIYHWH
|
# ? Sep 28, 2014 03:04 |
|
He wants a benevolent AI because he's the ultimate "Ideas Guy" and he could totally solve humanity's problems IF ONLY someone would just handle all the details and do all the work for him.
|
# ? Sep 28, 2014 03:32 |
|
Man, if you told me in the 1990s that there'd be a sizable group of people who'd literally worship the idea of AI, and believe it could cause facsimiles of various vaguely biblical things, I'd have been sure you were referencing science fiction.
|
# ? Sep 28, 2014 03:43 |
|
SolTerrasa posted:This keeps bothering me. I also read that paper, but I assumed I must have missed or skimmed over the math, because legitimately that theory requires well under a page of math. I studied decision theory, my thesis had a strong decision theory component, and his new theory is just not that complex. I know. He spends 100 pages making his decision theory seem incredibly complicated, and in the end can't be bothered to spell it out explicitly. I'd be surprised if an actual paper containing a formalized version of his theory would run more than 5 pages. I'd also be surprised if it merited publishing. Btw, I'd love to see more on the Yud/Hanson debate because otherwise I might find myself actually reading it.
|
# ? Sep 28, 2014 04:01 |
|
SolTerrasa posted:Either way, gently caress you, Yudkowsky, this is why academia won't let you in. There's thankfully been quite the backlash against this kind of thing in academic circles, for obvious reasons. For those interested in a pretty good discussion of the Hows and Whys of this sort of thing, I can direct you to an excellent article in Scientia Salon by Dr. Maarten Boudry, "The Art of Darkness" (http://scientiasalon.wordpress.com/2014/07/07/the-art-of-darkness/). He's trashing Jacques Lacan and postmodern theology, but you could easily plug in all the details about Yudkowsky and Less Wrong and the article wouldn't change much. Some representative quotes: quote:How is it possible for the reader to be taken in by the impenetrable pronouncements of — as we shall call him — The Master? The first thing to note is that, in everyday life, it sometimes makes perfect sense to accept a statement before fully grasping it. For example, children accept what adults tell them even before they understand precisely what they are supposed to believe. People endorse the equation of special relativity (E=mc²) or the reality of economic recession while having only the foggiest idea of what such claims really amount to. This willingness to accept an obscure utterance for the nonce, without knowing what exactly was on the speaker’s mind, may actually facilitate the learning process. If you insist on understanding every single word of what you are told, before proceeding to the next step, you may not get very far. Better to bracket those obscure parts and trust that you will figure out their exact meaning later on. quote:It is important to emphasize the intimidating effect of unintelligible prose. In the midst of people who all profess to understand what is being said, it takes courage to stand up and admit that you don’t. 
The philosopher Paul Ricoeur was brave enough to admit, after attending one of Lacan’s seminars, that he did not understand a word of what was being said, even though he found himself in the company of people who seemed to be in the knowing. Many interpreters have boasted that they, for one, understand Lacan perfectly well. The philosopher Jean-Claude Milner has maintained that the man’s writings are in fact crystal-clear, despite appearances to the contrary, and are hardly in need of any interpretation. Who will be confident enough, after years of investment in Lacanian exegesis, to see through this rhetorical bluster?
|
# ? Sep 28, 2014 06:09 |
|
su3su2u1 posted:Btw, I'd love to see more on the Yud/Hanson debate because otherwise I might find myself actually reading it. Can do!
|
# ? Sep 28, 2014 07:37 |
|
ikanreed posted:Man, if you told me in the 1990s that there'd be a sizable group of people who'd literally worship the idea of AI, and believe it could cause facsimiles of various vaguely biblical things, I'd have been sure you were referencing science fiction. I'm somewhat less surprised. One of the only groups I can think of that unambiguously promotes human cloning is the UFO worshipping cult Raëlism, who as I understand it wanted to clone themselves to live forever (somehow, that's not how actual cloning works, but eh). The idea of immortality through technology isn't new either, you just need new cults every so often, as it starts to become obvious that the old ones aren't going to keep any of the promises they made.
|
# ? Sep 28, 2014 12:37 |
|
quote:I am increasingly put off by how mandatory cuddle puddles are if one wants to participate in the rationalist community. I’m not into cuddle puddles. I’ve tried it several times, and they’re just not for me. As a result, I am frequently excluded, de facto if not de jure, from many rationalist events. And I really don’t want that to be the case. I don’t see a solution here.
|
# ? Sep 28, 2014 14:55 |
|
Big Yud posted:I'm incredibly brilliant and yes, I'm proud of it, http://www.sl4.org/archive/0406/8977.html
|
# ? Sep 28, 2014 15:48 |
|
Huh. Well.
|
# ? Sep 28, 2014 16:39 |
|
quote:I am SIAI's cackling mad scientist in the basement. That is my job

Also he drops a reference to that anime where a guy wishes for a goddess to be his girlfriend forever.
|
# ? Sep 28, 2014 16:45 |
|
Please tell me he's speaking metaphorically and not about literal piles of people cuddling each other.
|
# ? Sep 28, 2014 16:58 |
|
Spoilers Below posted:There's thankfully been quite the backlash against this kind of thing in academic circles. Thanks, this is cool. Had to fix the link though http://scientiasalon.wordpress.com/2014/07/07/the-art-of-darkness/ Kind of surprised that cults aren't mentioned more; it seems like the same mechanisms are at play.
|
# ? Sep 28, 2014 17:15 |
|
I am not a book posted:Please tell me he's speaking metaphorically and not about literal piles of people cuddling each other. No. Cuddle parties are a real thing. They are piles of people cuddling.
|
# ? Sep 28, 2014 17:16 |
|
The people calling LessWrong a cult might be onto something because I GIS'd "cuddle party" and saw a bunch of photos that could be mistaken for the aftermath of Jonestown.
|
# ? Sep 28, 2014 17:31 |
|
So is what I assume is extreme sexual frustration/repression a big part of being a rationalist? Also, while googling cuddle party: quote:A Cuddle Party is a great place to rub your boner into the backs of complete strangers without anyone getting mad. Because, according to https://www.cuddleparty.com “An erection is just nature’s thumbs-up sign”.
|
# ? Sep 28, 2014 17:52 |
|
Political Whores posted:So is what I assume is extreme sexual frustration/repression a big part of being a rationalist? I'll just respond with this: http://slatestarcodex.com/2014/09/27/cuddle-culture/
|
# ? Sep 28, 2014 19:31 |
|
The next post on that thread is someone who is spectacularly unimpressed.

quote:You seem to be admitting that you are both rational

Big Yud responds... Oddly?

Yudkowsky posted:I aspire to experience those emotions that I would feel if I knew the

SolTerrasa fucked around with this message at 20:01 on Sep 28, 2014 |
# ? Sep 28, 2014 19:59 |
|
su3su2u1 posted:I'll just respond with this: You know Scott Alexander says really really really dumb poo poo a lot, but he seems like he'd be a nice person to hang out with. As opposed to Big Yud, who just seems like a tedious bore.
|
# ? Sep 28, 2014 20:02 |
|
That ancient mailing list is a goldmine, where did you FIND that?

Yudkowsky posted:> In my experience, clever people are not always clever *all* the time ...

This is two posts after saying rationality comes easily to him. Proclaiming yourself to be literally infallible is probably not a good choice, Big Yud.
|
# ? Sep 28, 2014 20:10 |
|
SolTerrasa posted:That ancient mailing list is a goldmine, where did you FIND that? These people have clearly never tried to predict something that can be easily falsified, like a presidential election or the whims of the stock market or a scientific hypothesis. They just sit around theorizing on what it would be like to predict something. I bet if I predicted something, I'd be right, because I'm so smart. And if I predicted wrong, then if I was even smarterer I'd definitely be right! Well, they do make predictions, but only over such a long time period, and couched in such vagueness, that they can never be quite proven wholly incorrect by the sheer passage of time. Of course, they'll never be right either. But at least they'll be... less wrong.
|
# ? Sep 28, 2014 20:36 |
|
Chwoka posted:
You're exactly right that they never, ever test their hypotheses and only state them in qualitative terms. Yudkowsky explains why in a part of the debate I skipped over; he says that he got disillusioned by quantitative inaccuracies and resolved only ever to make qualitative predictions under the assumption that this would save him from the standard futurist's fate of being either completely wrong ("flying cars!") or so right that it hardly counts ("some people will own computers no larger than a room!").
|
# ? Sep 28, 2014 21:05 |
|
The funny thing is, the best futurist I've seen wrote for the Ladies Home Journal in 1900.
|
# ? Sep 28, 2014 21:16 |
|
Tunicate posted:The funny thing is, the best futurist I've seen wrote for the Ladies Home Journal in 1900 The saddest one on that list is the free university education one. Also: su3su2u1 posted:I'll just respond with this: "I see most people I don't know as a combination of scary and boring". Hmm, yes, this seems like a well-rounded adult thing to say. Also how he only experiences universal love by cuddling with cute girls, but it's totally not a sex thing. Political Whores fucked around with this message at 22:26 on Sep 28, 2014 |
# ? Sep 28, 2014 22:22 |
|
The Sequences Digression 1 - Hanson / Yudkowsky AI FOOM Debate, part two So now it's time to start getting into the actual arguments which make this a debate. This post represents roughly the next 60 pages of the debate. At this rate there will probably be ten posts like this overall, but that's just a guess. It's important to remember that although Yudkowsky is slightly mad, he is not an insane lunatic who spits out pure nonsense. If you're looking at this expecting to see the mad ramblings of a madman, I recommend you go back to that mailing list linked a few posts upthread; holy poo poo. This is a debate between two intelligent people; remember that. I have previously described Yudkowsky's arguments as resembling a "house of cards". He presents a series of arguments, each of which would be contentious on their own (for instance, "the Many-Worlds interpretation is correct" or "Intelligence can be measured independent of the goals to which that intelligence applies itself" or "mirror neurons exist and work the way that they are theorized to work"), as if they are settled science and obviously true. All his beliefs depend on each other in complicated interlocking ways; this is why he spends so much time on background. His thesis would be ludicrous if stated straight out, but it makes sense in the context of a huge number of assumptions. I'm diving into the rabbit hole here; if you come out of this with the impression that Yudkowsky is a bright kid with strange beliefs, who believes that he is so intelligent that he can rederive entire fields from first principles, who puts no stock in other people's opinions if his own intuitions conflict with them... then I've done a good job. The impression of Hanson I have is that he is also pretty self-impressed, but appears to have earned it. Hanson is right about at least one thing: it is very hard to make out exactly what Yudkowsky is saying, but here is my summary of their positions and exchanges (these are not quotes). 
Yudkowsky: Artificial General Intelligence will be created soon. This intelligence will be capable of self-optimization, and each optimization will lead to an increase in available optimizations to the new version of the AI which the old version has created. Soon, an intelligence explosion will occur, and the first AI to become capable of sustained recursive self-improvement will become godlike in its power. This "intelligence explosion" is dangerous, because it is unpredictable where this AI will be created, and the skill of its programmer will have a disproportionately large impact on the rest of the world. Imagine, for instance, a "paperclip maximizer", produced by a paperclip factory somewhere. If that AI, whose terminal values are simply "make paperclips", is turned loose on the world with godlike powers, we will simply all be reduced to the metals in our cells. Earth itself will become a giant pile of paperclips, and unstructured non-metal objects. This would be bad, so I need to run MIRI, and I need to make people understand why they should be scared of an intelligence explosion. I am dead certain of all of this. I call this the "AI go FOOM" scenario. [author's note: I think that FOOM is an onomatopoeia for the sound of a rocket takeoff] Hanson: As an economist, I predict that there will probably be some Singularity (in the economic sense) in the next few hundred years. That is, I predict that there will be some sustained change in the rate of economic growth, comparable to that of the Industrial Revolution. It may be caused by AI; I believe it probably will be, but I am unsure. I believe that I predict this using sound economic methods, but recognize the fallibility of my field and accept a 25% - 50% chance that I will be proven wrong. 
I do not believe that the FOOM scenario will happen, because recursive sustained self-improvement exists today, and there does not appear to be any FOOM situation with, say, computer programmers, or optimizing compilers, or machining tools. Okay, so now we have to figure out why Yudkowsky believes this. Fortunately, I am still pretty adept at understanding Yudkowsky words from the time that I spent as a cultist of his. He believes that the history of earth, in terms of things that Really Matter, is all about optimization processes. The history of earth goes "hunk of rock, replication, cells, animal brains, human brains". He thinks that the next step in this sequence is "self-improving AI". This is what he thinks natural selection and intelligence have in common; they are both optimization processes.

Yudkowsky posted:This is how I see the story of life and intelligence - as a story of improbably good designs being produced by optimization processes. The "improbability" here is improbability relative to a random selection from the design space, not improbability in an absolute sense - if you have an optimization process around, then "improbably" good designs become probable.

He thinks that the important thing about an optimization process is what it optimizes, and according to what rules. That's actually pretty reasonable-sounding to me. He believes that there is a firm distinction between two levels of optimization. There's the "object level", that is, the metric or system which is being optimized. In his examples, this is "replication -> survival / dominance", "cells -> reproduction", "animal brains -> reproduction, also some goals", "human brains -> a staggering array of goals". Then there's the "meta-level", which is things like "sexual reproduction as a method of introducing mutation" and "natural selection of asexual populations". 
quote:Cats have brains, of course, which operate to learn over a lifetime; but at the end of the cat's lifetime, that information is thrown away, so it does not accumulate. The cumulative effects of cat-brains upon the world as optimizers, therefore, are relatively small. Yudkowsky argues that the thing that makes AI really, really different is that it can optimize its own meta-level, which nothing else has ever been able to do. Now, this seems suspect to me, to define a series of terms that no one else uses, point to the categorizations you create, and then claim it is significant. But this is not a Yudkowsky-SolTerrasa debate, so let's see what Hanson has to say in response. Hanson posted:Eliezer offers no theoretical argument for us to evaluate supporting this ranking. But his view does seem to make testable predictions about history. It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases. It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller. Yudkowsky posted:... Hanson posted:If you can't usefully connect your abstractions to the historical record, I sure hope you have some data you can connect it to. Otherwise I can't imagine how you could have much confidence in them. Yudkowsky posted:Depends on how much stress I want to put on them, doesn't it? If I want to predict that the next growth curve will be an exponential and put bounds around its doubling time, I need a much finer fit to the data than if I only want to ask obvious questions like "Should I find rabbit fossils in the pre-Cambrian?" or "Do the optimization curves fall into the narrow range that would permit a smooth soft takeoff?" 
Hanson posted:Eliezer, it seems to me that we can't really debate much more until you actually directly make your key argument. If, as it seems to me, you are still in the process of laying out your views tutorial-style, then let's pause until you feel ready.

Like I said, these are some sane and reasonable people, but Hanson has a lot of experience with this, and Yudkowsky keeps posting and posting and posting about his house-of-cards beliefs. Hanson wants Yudkowsky to post the final card on top of the house, before he goes back and flicks out the most obviously supportive one. Yudkowsky can't debate this way, because he needs his beliefs to be uncontested at every step; he considers anything that he's said before which wasn't specifically contested to be true, which is how he manages to go so far off the rails. Hanson may or may not know this (I don't really know about the extent of their collaboration), but his strategy is a good one when dealing with people who seem smart but have beliefs way, way off what most people do. So from here, Yudkowsky keeps going with his theory about meta-level determinism, but that's just not that interesting, and I really do feel like I've captured it above. It is a card in the house; it's not right or wrong by itself, just shaky. The next interesting point is when Hanson tries to make the discussion concrete, as an economist with a data focus reasonably would. He discusses whole-brain emulation specifically as the method by which AGI becomes possible. He says that he wants to take as given that we will be able to simulate an entire human brain in a computer. Honestly, this doesn't seem that unreasonable to me; all the math I'm capable of doing (based on artificial neural networks, not neuroscience! Don't trust me on this!) seems to indicate that brains and Google's datacenter machines are equivalent-ish. 
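The "equivalent-ish" hand-wave is easy to napkin-check. Every number below is a round, contestable order-of-magnitude guess of mine (standard textbook-ish figures, not anything from the debate), and treating one synaptic event as one "op" is the crudest model imaginable:

```python
# Napkin arithmetic for the brain-vs-datacenter comparison above.
# All three inputs are rough order-of-magnitude guesses, not measurements.

neurons  = 8.6e10  # neurons in a human brain, order of magnitude
synapses = 1e4     # synapses per neuron, order of magnitude
rate_hz  = 1e2     # generous average firing rate

# Crudest possible model: one synaptic event = one "op".
synaptic_events_per_s = neurons * synapses * rate_hz

print(f"~{synaptic_events_per_s:.0e} synaptic events/s")  # prints ~9e+16
```

On that model you land around 10^17 "ops" per second, which is at least in the same zip code as a large 2014 datacenter; every factor here could be off by a couple of orders of magnitude, which is rather the point.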
We don't yet have the brain-scanning technology or the neuron models for this to work, but if they ever got there, we would probably be able to at least try it. It's not crazy. Hanson calls these things "bots", and I don't get why; they later switch to "ems", which is clearer.

Hanson posted:With a thriving bot economy, groups would continue to explore a variety of ways to reduce bot costs and raise bot value. Some would try larger reorganizations of bot minds. Others would try to create supporting infrastructure to allow groups of sped-up bots to work effectively together to achieve sped-up organizations and even cities. Faster bots would be allocated to priority projects, such as attempts to improve bot implementation and bot inputs, such as computer chips. Faster minds riding Moore's law and the ability to quickly build as many bots as needed should soon speed up the entire world economy, which would soon be dominated by bots and their owners.

Fair enough; that makes sense to me. That's Hanson's slow takeoff: he says this will take a while, but it'll be huge, a revolution on the scale of farming or of industry. Yudkowsky doesn't engage with this for a while, but a few other people (who are included in OB but don't generally comment on AI stuff) do. They say, basically, that Hanson is right but slightly off-base; it'll happen either faster or slower than he thinks, but not by an order of magnitude in either direction. As far as I can tell, that's as close as these people ever get to agreeing.

In the meantime, Yudkowsky is still trying to explain why it matters so much that an AI could rewrite its own meta-level.

Hanson posted:I can't win a word war of attrition with you, where each response of

He writes, and I swear to you this is real, the most aggravating and smug strawman argument I have ever seen him write.
He compares this argument to a hypothetical argument between two intelligent processes watching life on Earth evolve: one of them is a believer who thinks that the human brain is going to make a big difference, and the other is a skeptic who thinks that the processes he is familiar with will continue to dominate. You don't have to read it, but this is where my notes start reading "gently caress you gently caress you gently caress you gently caress you".

http://lesswrong.com/lw/w4/surprised_by_brains/

And, of all things, the actual extent of the disagreement, the core of the arguments, gets exposed in the comments of this irritating strawman smugness. Hanson read the whole thing, commented usefully, and Yudkowsky FINALLY got back to him. It's amazing; my impression of comment sections has been changed forever.

Hanson posted:...

The implication is that AIs will leak information about their relative increases in power, and it's possible that a single AI undergoing recursive self-improvement will spread its knowledge to other AIs undergoing the same process, which would create an environment that is, for lack of a better word, polytheistic. Many AIs would become gods without much time between their ascensions, and the world would reach a new stability.

Yudkowsky posted:To me, the answer to the above question seems entirely obvious - the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can't step in and use off-the-shelf an improved cortical algorithm or neurons that run at higher speeds. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.
Hanson posted:Eliezer, it may seem obvious to you, but this is the key point on which we've

I'm going to end this post there, but it gets more interesting from here: they've finally hit the core of their disagreement. It's about what they're going to start calling "total war". Yudkowsky says that an AI would never willingly give up information about its advances to potential enemies, and Hanson says "economics seems to suggest that it's actually optimal to trade information about advances."
|
# ? Sep 28, 2014 23:09 |