|
Djeser posted:on a scale of yudkowsky to nite crew, how ironic is this anime we're talking about
|
# ? Oct 31, 2014 17:33 |
|
|
|
Srice posted:nite crew is not ironic about enjoying anime, friend but nite crew posts in lower case, the punctuation of irony
|
# ? Oct 31, 2014 17:34 |
|
Djeser posted:but nite crew posts in lower case, the punctuation of irony in this post-post-irony world i can't even tell who's being ironic here
|
# ? Oct 31, 2014 17:35 |
|
irony, but ironically
|
# ? Oct 31, 2014 17:36 |
Anime is very serious business. Also, don't let Yudkowsky out of the box.
|
|
# ? Oct 31, 2014 17:36 |
|
Wow how did this thread get so good all of a sudden?
|
# ? Oct 31, 2014 17:51 |
|
Pavlov posted:Wow how did this thread get so good all of a sudden?
|
# ? Oct 31, 2014 17:55 |
|
Once MIRI reaches its Kickstarter stretch goal, they will definitely build a singularity waifu for each and every one of us.
|
# ? Oct 31, 2014 18:40 |
|
|
# ? Oct 31, 2014 19:08 |
|
So is Ghost in the Shell Yudkowsky's ideal end state or his nightmare world?
|
# ? Oct 31, 2014 19:42 |
|
It's been a while since I've seen it, but it seems like it'd be closer to the ideal end state, given the lack of 'true' AI in the setting (that'd just be the Tachikomas, and even that by accident, right?). I can't imagine he'd be too upset with a setting where technoimmortality is a thing, but then again I don't really know the inscrutable ways of the Yudkowsky yet.
|
# ? Nov 3, 2014 02:25 |
|
LW-ite argues that sexism is literally empiricism
|
# ? Nov 3, 2014 02:40 |
|
My sexism and prejudices are justified by virtue of "where there's smoke there's fire." --Master Rationalist
|
# ? Nov 3, 2014 02:47 |
|
quote:I find this whole Dark Enlightenment/Neoreaction/Neopatriarchy development fascinating because it shows the failure of the progressive project to control the human mind. In the U.S., at least, progressives have a lot of control in centrally planning the culture towards Enlightenment notions of democracy, feminism, egalitarianism, cosmopolitanism, tolerance, etc. Yet thanks to the internet, men who previously wouldn't have had the means to communicate with each other have organized in a Hayekian fashion to discover that they have had similar damaging experiences with, for example, women in a feminist regime, and they have come to similar politically incorrect conclusions about women's nature. And this has happened despite the policies and preferences of the people who hold the high ground in education, academia, law, government and the entertainment industry. Haha, this is the essence of what somethingawful was built on mocking: people with horrible views being able to gather online and reinforce each others' bs. Doubly so in this case, since they're acutely aware of the phenomenon but take it as a good thing.
|
# ? Nov 3, 2014 09:15 |
|
Epitope posted:I find this whole Dark Enlightenment/Neoreaction/Neopatriarchy development fascinating because it shows the failure of the progressive project to control the human mind.
|
# ? Nov 3, 2014 10:29 |
|
I got into an irritating and exhausting argument with a friend about AI not too long ago, and after it was over, I thought of this thread. Most of his points were faulty right out of the box, but he said some stuff that I legitimately didn't know how to respond to, even though I have a Bachelor's in CS. Can anyone smarter and more experienced than I am take a crack at these (roughly paraphrased) arguments or point me to useful reading on the subject?
-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."
-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output sequences, so there's no reason to think that some combination of input couldn't cause them to become conscious."
-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts such as slavery and freedom."
-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."
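(On the fuzzy logic claim, for concreteness: the standard fuzzy operators are nothing but arithmetic on a degree of truth in [0,1], min for AND, max for OR, 1 - x for NOT. A toy Haskell sketch, names mine; note that nothing in it touches "concepts" at all.)

```haskell
-- Toy sketch (names mine): fuzzy logic is just arithmetic on a
-- degree of truth between 0 and 1. There is no machinery here
-- that could "develop concepts" of anything.
type Degree = Double

fAnd, fOr :: Degree -> Degree -> Degree
fAnd = min   -- fuzzy AND: as true as the least-true input
fOr  = max   -- fuzzy OR: as true as the most-true input

fNot :: Degree -> Degree
fNot x = 1 - x

main :: IO ()
main = print (fAnd 0.7 0.4, fOr 0.7 0.4, fNot 0.7)
```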
|
# ? Nov 14, 2014 22:49 |
|
Polybius91 posted:-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."
And I guess he's claiming that humans `rewrite their own code on the fly'? I'm not sure I even know what that means. Like there's a vague notion that you can sit down and if you don't know, I dunno, algebra you could learn it. But apart from just sorta insinuating that this process is similar to, or perhaps (more strongly) identical to, tacking a subroutine onto a running programme I really don't see the argument for this model. Put more strongly: I don't think humans learn by adding subroutines.
Polybius91 posted:-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output sequences, so there's no reason to think that some combination of input couldn't cause them to become conscious."
But I'm not sure I understand this line of argument. Is he arguing that any arbitrary mechanism that accepts input might, given the right input, spontaneously develop sentience independent of architecture? Because that's fairly astonishing. It implies that, for example, your garage door opener might suddenly become self-aware if you just manipulated the remote in the proper way. I think it also implies that any other creature that `operates on binary input/output sequences' similarly to the way humans do would become intelligent if exposed to the right input. So a very thoughtful conversation with a stoat, say, or a trout would produce intelligence.
Polybius91 posted:-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts such as slavery and freedom."
Polybius91 posted:-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."
|
# ? Nov 14, 2014 23:14 |
|
Polybius91 posted:-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."
Polybius91 posted:-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output sequences, so there's no reason to think that some combination of input couldn't cause them to become conscious."
Polybius91 posted:-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts such as slavery and freedom."
Polybius91 posted:-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."
|
# ? Nov 14, 2014 23:25 |
|
Thanks for the responses. In retrospect, I already knew some of those things on some level and should've considered them, but they slipped my mind (or I had a hard time putting them into words) in the heat of an argument.
|
# ? Nov 14, 2014 23:39 |
|
Tell him that tools like fuzzy logic are what make argumentation like that possible.
|
# ? Nov 15, 2014 00:55 |
|
Addendum, wrt anonymous functions and self-rewriting code. This is pretty much an expansion of SubG's explanation, featuring code samples. He probably meant first-class functions or macros, by the way. A lot of languages, including Lisp, have a facility to substitute code into other code (Lisp has more than one, in fact). In Haskell, for instance, you can write this: code:
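(The actual code sample didn't survive the archive, so here's a stand-in of my own making the same basic point: functions are ordinary values, so one piece of code can be handed another piece of code and plug it in wherever it likes.)

```haskell
-- Stand-in sample (mine; the original is missing): a function that
-- accepts another function and substitutes it into its own body.
twice :: (a -> a) -> (a -> a)
twice f = f . f

addThree :: Int -> Int
addThree = (+ 3)

main :: IO ()
main = print (twice addThree 10)
```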
You can also write code like this (this is actually much easier to write in Lisp, but I'm more familiar with Haskell and trust myself more not to screw it up): code:
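(This sample is also lost; what follows is my best-guess reconstruction, treating the input code block as a nested list of symbols, which is all makeReds needs to be.)

```haskell
-- Reconstruction (mine; original lost): "code" represented as a
-- nested list of symbols, and makeReds as a rewriting pass over
-- it -- which is to say, find-and-replace.
data Code = Atom String | List [Code] deriving (Show, Eq)

makeReds :: Code -> Code
makeReds (Atom "blue") = Atom "red"
makeReds (Atom s)      = Atom s
makeReds (List cs)     = List (map makeReds cs)

main :: IO ()
main = print (makeReds (List [Atom "paint", Atom "blue", List [Atom "deep", Atom "blue"]]))
```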
So while it's technically true that, via substitution, you're writing self-modifying code, that doesn't mean the process that modifies the code does anything useful or clever (similar to SubG's comments). The intelligence is still somewhere else. We wouldn't say that makeReds is any smarter than find/replace -- in fact, since both Lisp and Haskell do this by representing the input code block as a list or nested list of instructions, the same way you might represent text as a list of characters, it literally is find and replace.
|
# ? Nov 15, 2014 01:11 |
|
Krotera posted:A lot of languages including Lisp have a facility to substitute code into other code. (Lisp has more than one, in fact) In Haskell, for instance, you can write this:
|
# ? Nov 15, 2014 01:38 |
|
SubG posted:And it's worth noting that the idea that permitting self-modification is some kind of advanced feature of advanced languages for doing advanced poo poo is more or less precisely backwards. Twiddling around with instructions in memory is like the number one stupid assembly trick. And is something that literally every 8 bit micro could do in BASIC with a bunch of judicious (or injudicious) POKE statements. Implementing protection against doing this accidentally is, historically, the advancement. And getting around those protections is, historically, one of the largest vectors for computer security compromises (buffer overflows). Yeah: I'm overall sympathetic to the feature -- I think substituting code into other code is very useful and usually not too dangerous -- but it's generally very hard to make it safe and Lisp's take isn't particularly strong that way. It's a default condition and making it usable requires more than just restoring how it works by default. The semantics surrounding self-modification in Lisp or especially Haskell are completely different from the kind of semantics surrounding it in native code doing nothing funny. (You could, grossly abusing the word "creative," say that, e.g., C allows for much more creative self-modification than Lisp does!)
|
# ? Nov 15, 2014 01:47 |
|
Polybius91 posted:I got into an irritating and exhausting argument with a friend about AI not too long ago, and after it was over, I thought of this thread. Most of his points were faulty right out of the box, but he said some stuff that I legitimately didn't know how to respond to, even though I have a Bachelor's in CS. Can anyone smarter and more experienced than I am take a crack at these (roughly paraphrased) arguments or point me to useful reading on the subject? I wanted to throw my 2 cents in on this one. Your friend isn't even identifying the right old technology. Storing programs in the same memory as data (von Neumann architecture, dates from the 1940s) is what opens up the possibility of programs modifying themselves or other programs, not LISP or any other specific language. There's no doubt this is a useful tool to have in the box, and not just for AI: you need it to do dynamic linking, JIT compilers, and many other things. However, we haven't got the foggiest idea how to implement humanlike cognitive or learning processes on top of it. Similarly, though we understand some things about neurons (biology's self-malleable building blocks), we don't know much about how they work together as the substrate of human intelligence. There is simply a giant gap in knowledge here. Put another way, taking credit for self-modifying code as a major AI milestone is a bit like taking credit for figuring out how to burn tree branches as a major step towards modern industrial civilization. Significant, essential? Sure, you can always say that about baby steps. But they're still just baby steps.
|
# ? Nov 16, 2014 08:11 |
|
All of what's been said is true. In addition, one essential response to anyone who suggests pretty much anything about singularities is that we have no idea how to implement humanlike cognition at all. We have no reason to believe that doing so will be a "one weird trick" sort of thing, and a lot of reasons to believe that it won't. There are a lot of subfields of AI, and almost all of them have hard unsolved problems; we're not going to magically discover it all sometime soon. There are too many pieces that need to come together.
|
# ? Nov 16, 2014 10:07 |
|
BobHoward posted:I wanted to throw my 2 cents in on this one. Your friend isn't even identifying the right old technology. Storing programs in the same memory as data (von Neumann architecture, dates from the 1940s) is what opens up the possibility of programs modifying themselves or other programs, not LISP or any other specific language. I'd say self modifying code is just the tool a hypothetical strong AI would use rather than what actually gets the job done. Imagine if you knew how to modify your brain, change the way you think at will. That ability would be useless to you without the means to identify what would be a meaningful, useful change. Why are people assuming that just because it's an AI it knows how to code, or really understands how its own hardware works? We barely know anything about our own brains and we've had them for thousands of years.
|
# ? Nov 16, 2014 13:02 |
|
Slime posted:I'd say self modifying code is just the tool a hypothetical strong AI would use rather than what actually gets the job done. Imagine if you knew how to modify your brain, change the way you think at will. That ability would be useless to you without the means to identify what would be a meaningful, useful change. Why are people assuming that just because it's an AI it knows how to code, or really understands how its own hardware works? We barely know anything about our own brains and we've had them for thousands of years. It's a bit simplistic, but at least it isn't categorically nonsense, like some of the stuff these people come up with. It just relies on the premise that you can intentionally design an intellect that's greater than your own, which is neither provably right nor wrong.
|
# ? Nov 17, 2014 15:45 |
|
Who wants some more nuance in their lives?
|
# ? Nov 17, 2014 19:17 |
|
quote:The rationality movement isn’t about epistemology. Literally "we don't actually care about studying knowledge or anything like that, we just pretend we do because we think it makes us look cool."
|
# ? Nov 17, 2014 20:33 |
|
Cardiovorax posted:It's a bit simplistic, but at least it isn't categorically nonsense, like some of the stuff these people come up with. It just relies on the premise that you can intentionally design an intellect that's greater than your own, which is neither provably right nor wrong. And of course there's the possibility that the whole concept of intelligence is so broadly defined that it has no specific referent, so we can faff around concocting definitions that help whatever argument we wish to make but little more. If some researcher was really, really into dogs and announced that he was on the verge of creating the first transdogian entity because he could make a dog that's more doggy than dogness...well, you know maybe he has knocked together some special-use definition of `dog', `dogness', and so on where this makes some sort of sense. But that doesn't mean that it's not all just hollow wankery that doesn't have much to do with anything else.
|
# ? Nov 17, 2014 23:24 |
|
If it's purely a measure of a computer's technical capabilities, those obviously encounter physical limits at some point. The first time someone invents an AI that can design and build a faster processor for itself, it won't be able to iterate infinitely; eventually it will hit a limitation like heat dissipation, or essential components being only one atom wide. I think computer programs, although they can do creative things, tend to be creative within certain boundaries. Making a creatively unrestricted AI means it spends even more wasted time on useless dead ends, even if it could hypothetically achieve some paradigm shift that most AIs couldn't.
|
# ? Nov 18, 2014 07:53 |
|
AI FOOM Debate, Part 3 I'm going to make a strong effort to finish the rest of the 400 pages in front of me in this post. Consequently I'm going to speed past some points, but I really love talking about this stuff. I finally had a good discussion with my coworkers at Google, so I feel even more empowered to present what I think of as the mainstream opinion among people who actually work on AI. To start us off, I'm going to give a quick summary of the previous 120 pages of debate. Eliezer Yudkowsky believes that once we design an AI which is more intelligent than a human, that intelligence will be able to improve itself faster than we could improve it. "More intelligent" may be an incoherent concept, as SubG has said above me, but might not; the g factor [1] might turn out to be maximizable. Probably not, but it's not definitely incoherent. Eliezer believes, in short, that this will lead to an intelligence explosion, where a slightly-smarter-than-human AI will evolve into a godlike AI in well under 6 months. His preferred timescale is about a week. Hanson believes instead that this will happen very slowly, over the course of years or decades, and that humans will have the chance to intervene if the AI turns out to be evil. If you grant Yudkowsky his belief, then his work with MIRI makes sense; if you grant Hanson's, then his work with the FHI makes sense. Their beliefs are incompatible with each other, and they're using Aumann's Agreement Theorem to (incorrectly) require that one of them change their view. At the end of the last post, we had reached this core disagreement. Because of the way they both talk, it took that long to actually get here, but now things start speeding up. Yudkowsky argues that there are five distinct sources of discontinuity in the world: "Cascades, Cycles, Insight, Recursion, and Magic", in escalating order of importance. Cascades are what happens when one innovation opens up another innovation opens up another innovation. 
Greatly simplified, it's how you get from unseeing ancient sea creatures to light-sensitive ones to eyeballs. This produces an effective source of discontinuity between the ancient sea creatures and the ones with eyeballs. Then there are cycles. Cycles are what you get when you perform a repeatable transformation, like the example of a pile going critical and producing plutonium. In fact, I'm just going to quote Yudkowsky here. I'm trying to do this less and less because I have 400 pages to get through, but this one is important. Yudkowsky posted:Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternate layers of graphite and uranium... A cycle counts as a source of discontinuity because it can lead to things that would otherwise be impossible; there's a critical threshold of "number of iterations caused per iteration", or k. Once a process has k > 1.0 it's going to continue forever. This makes sense. Number 3 is "insight", and Yudkowsky does his famous handwaving to explain that intuition is this magical process that we can harness for the greater good, etc etc etc. Number 4 is "recursion", which is fundamentally exactly the same as "cycles", but with the added bonus that k increases every cycle. Number 5 is literally magic. If I was still playing "mock Yudkowsky" I would put a lot of words here about how this is exactly the same as all Yudkowsky's plans; it is literally "???? Profit", and that any plan which ends with "and then a miracle literally occurs" is complete garbage. But I'm not going to do all that. I actually think Yudkowsky has sold past the close; all he really needed here was point number 2. Cycles. If you can set up an AI which is better than humans at optimizing an AI to be "intelligent", and you have got k > 1.0, you have won, and I no longer feel the need to argue with you. 
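(To make the k business concrete: each iteration triggers k further iterations, so total activity is a geometric series. A toy sketch of mine -- the function name and numbers are made up, the point is only that the series is bounded iff k < 1:)

```haskell
-- Toy model (mine): generation n of a self-triggering process has
-- k^n iterations, so the total over n generations is a geometric
-- series. Bounded when k < 1; grows without limit once k > 1.
totalIterations :: Double -> Int -> Double
totalIterations k n = sum (take n (iterate (* k) 1))

main :: IO ()
main = do
  print (totalIterations 0.5 100)  -- stays under 2 no matter how many generations
  print (totalIterations 1.1 100)  -- already enormous, and still growing with n
```

Nothing about the process itself changes at the threshold; the arithmetic just stops converging, which is the whole content of the criticality analogy.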
If that can be done, none of this other poo poo matters; if that can be done, we're looking at a hard takeoff. My colleagues at Google came to the same consensus, and we didn't need 200 pages of rambling to do it. If the only thing you take away from my analysis is this, you've pretty much got the gist of it. If you can build an AI which can optimize an AI for "intelligence", and you can achieve k > 1.0, you will probably cause the singularity. Who the gently caress knows what happens next. At this point in the argument, I'm with Yudkowsky, I think he's made his point. If he can convince me that he can do these things, then gently caress it, he wins. Now, those are probably the two biggest "if"s I've ever written down. I don't even think that "intelligence" is something you can optimize for, even given what I said above about the g factor. I don't think you'll be able to write something with k > 1, either, but honestly I'm willing to grant that! If you can tell me what "intelligence" is, and give me a coherent plan for how you're going to build an AI which can optimize an AI for it, then goddamn it, I will change my mind entirely, full reverse, sit down and write MIRI a check. Okay, I'm going to move back to Hanson. Hanson is in the middle of a debate with some other dude. Carl Shulman is very angry about the idea of whole-brain emulation, and says that once we build emulated minds which run on computers, we'll have created a slave caste, subject to legal murder when they are no longer useful. Hanson chooses to engage, for some reason, but the long and short of it is that Google will probably have no more right to murder its emulated employees than its regular ones. As an aside, why the gently caress is it always Google? We're always the assholes in these wild fantasies, murdering and enslaving emulated people. I promise you right here, no one I work with wants to enslave or murder emulated humans. 
After he's done with that debate, he moves back to engaging with Yudkowsky, but he does it all wrong. He's committed to his belief that a hard takeoff is impossible, and he's just not right about that. His position is nearly indefensible. I'll try to represent it as fairly as I can but no one I know agrees with him. He argues that Yudkowsky is handwaving too much about this recursion and that all of his "sources of discontinuity" don't represent fast discontinuities, but rather sources of discontinuity over the course of years or decades. We didn't go from blind sea creatures to eyeballs in a day, despite cascades. He argues that Yudkowsky is failing to account for the source of bias which leads people to view history as goal-directed; he's arguing by cherry-picking his examples and ignoring all the other instances where recursion caused nothing much to happen. Yudkowsky comes back and says "but those cases were not recursive enough!" It's not an interesting argument to me, but Yudkowsky draws a distinction between things which are like an optimizing compiler (things which can improve performance, but only once) and things which are like a uranium pile (where improving the pile improves the output indefinitely and sustainably). I'm willing to grant him that, but I would note that all I've agreed to here is that it is conditionally possible that his preferred scenario will occur. I still need to be convinced that there's such a thing as "intelligence", and that there's a way to build an AI to optimize it with k > 1. Even after I've been convinced of those things I still need to know that it's likely. Lots of things are possible, but I don't plan my life around the possibility that I'll be struck by lightning on a cloudless sunny day. Yudkowsky then demonstrates that his knowledge of nanotechnology comes exclusively from science fiction. 
He tries to take a step back from the FOOM debate (even though he's already made his point) and points to nanotech as something else which is recursive. Nanobots can make more nanobots which can make more nanobots, etc. So a single nanobot factory is enough to take over the entire world, because it can reproduce infinitely. Even if you only make your perfect nanobot factory a week before the researchers on the other side of the world, it's too late, you've won. He's right about this too, in exactly the same way as with his AI FOOM. If you can do a thing that no one agrees is possible, then you will destroy the world. He's made his point but he never seems to notice what else he has to prove before he's done. And Hanson can't seem to call him out on it, because Hanson is still trying to pretend that cycles aren't possible. His next argument is probably his best, which is that there really aren't any purely recursive things in reality. We always need to feed some input into the system. Nanobots eat more and more resources, maybe AIs eat more and more CPU cycles. Who knows? But it's not fair, he says, to postulate that your particular singularity will be the first ever purely recursive system. He goes back to the concept of total war, again. He seems to believe that the answers to my two important questions ("how do you optimize an AI for optimizing an AI for intelligence" and "how do you make sure k > 1") are "international cooperation and research". He doesn't communicate it very well, but he seems to believe that nobody is going to develop the answers to those questions suddenly and by themselves. Therefore the first strong AI will come from the collaboration of dozens of universities and won't be able to suddenly and immediately FOOM; there'll be a slow and steady climb from k = 0.01 to k = 0.99, and when k = 1.0 it won't immediately spike. 
Improvements or iterations of Yudkowsky's cycle will still take a lot of time, so there'll be plenty of time to ensure that the AI is still sound every step of the way. Hanson posted:I have trouble believing humans could find new AI architectures anytime soon that make this much difference. Some other doubts: ... Does a single "smarts" parameter really summarize most of the capability of diverse AIs? Yudkowsky replies with something I find extremely ironic: a giant list of Kurzweil's predictions about the year 2009. Most of them were false then, most of them are true now. The point he's trying to make is "the future is very hard to predict", or "maybe we will find such an architecture", but of course he cannot provide any reason to believe that we will, just make the point that there's no reason to believe that we won't. Gosh, this sounds like the sort of disagreement two rationalists would have if they disagreed about priors. If Yudkowsky had read what me and (was it Cardiovorax? Or SubG? Or su3su2u1? I've forgotten) said about Aumann's Agreement Theorem, then he would have ended the debate here. But when he read the theorem, he missed all the caveats and so he'll continue his argument to the bitter end. There's almost 200 more pages at this point, but I'm really not going to find any more interesting points in it. Yudkowsky's conditionally right if he can demonstrate those two things I've been mentioning over and over, Hanson's right otherwise. Nothing will change this, but for the sake of completeness I'm going to summarize the rest of the arguments. Hanson points to an example of an AI which was able to perform recursive self-improvement, which is an amazing feat of engineering and absolutely brilliant. It's called EURISKO and you might want to go look it up on your own, legitimately impressive. Of course it didn't end the world, and the researcher who made it came to the conclusion that what's really going to matter in AI is data. 
When he finished with EURISKO, he went and created a system called CYC, which is essentially just a collection of reasoning systems and facts. Hanson loves it because it's actually useful for AI researchers and can answer encyclopedia questions. CYC is actually a lot like Knowledge Graph, which is what Google uses to give you those nifty little answers when you type in a question like "what is the capital of colombia?". Yudkowsky hates CYC with an undying passion, and that made me very angry because my thesis uses a lot of concepts from CYC. We both treat language as a collection of useful identifiers. "Paris" doesn't mean anything, "Capital" doesn't mean anything, "France" doesn't mean anything, but "Paris" "is-capital-of" "France". "Paris" "has-latlng" "48.8567N, 2.3508E". "Paris" "has-population" "2.21 Million". "Paris" "is-type" "City". This is an incredibly powerful model once you add on one simple thing: interaction with the real world. Whatever sensor your robot has, give it a thousand examples of a city, a thousand examples of a person, a model of the globe. Once you've done all that, you have built a knowledge engine. Your robot can be said to "know" things at least as deeply as we do, in that it can recognize novel versions of them and make predictions about them and reason about them. Yudkowsky hates this because he hates the idea that something can be intelligent in only one way. Like SubG said, he really needs to believe that intelligence is a single scalar value, and so he thinks that this model of intelligence is almost blasphemous. I had a professor in school who felt the same way, but it's not the prevailing view. Hanson argues that manufacturing is a bottleneck for the AI, it can't build infinitely many infinitely fast processors instantly, it's going to take tons of time and effort and that's plenty of time to slow the thing down and examine it. 
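(To make the triple talk concrete, a toy store of mine in the same spirit: facts are (subject, relation, object), and "knowing" something is just lookup over them. The Paris triples are the ones from the post; everything else is made up by me.)

```haskell
-- Toy triple store (mine) in the CYC / Knowledge Graph spirit:
-- identifiers mean nothing on their own; meaning lives in the
-- relations between them.
type Triple = (String, String, String)

facts :: [Triple]
facts =
  [ ("Paris", "is-capital-of",  "France")
  , ("Paris", "is-type",        "City")
  , ("Paris", "has-population", "2.21 Million")
  ]

-- Answering "what is the capital of X?"-style questions is a
-- list comprehension over the facts.
query :: String -> String -> [String]
query subj rel = [ obj | (s, r, obj) <- facts, s == subj, r == rel ]

main :: IO ()
main = print (query "Paris" "is-capital-of")
```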
Hanson says Yudkowsky seems to be assuming that there are no additional inputs into the system once k > 1.0, and that that's just not true. He uses some economics math that I'm not qualified to evaluate (but he sure is) to demonstrate that production will be increasingly less local as time goes on, so this obstacle will not cease to exist in the future. Hanson does not argue, but should, that at some point hardware will plateau. This is not something that people were sure about several years ago, but to paraphrase my friend (M.S. Electrical Engineering, designs processors for Intel), we're hitting the limits of physics now and Moore's law can't possibly continue unless something really impressive happens. Yudkowsky says that whatever math you can do doesn't matter because the AI is an entirely new type of thing, which no established models can predict. Hanson says that when you're making predictions about an entirely new type of thing, which no established models can predict, you make predictions with your new model in the very near future, check them, and then if they're right, see what your model says about the far future. Yudkowsky refuses to make a near-future prediction, citing his preference to remain abstract. Yudkowsky restates his position, and specifies a few negative predictions at Hanson's prompting. Yudkowsky will consider himself wrong if: Yudkowsky posted:1. An ad-hoc self-modifying AI undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human - and then stops, unable to progress any further. Of course, we can't test a negative prediction; those things haven't happened, but there's no good reason to believe them to be less likely than Yudkowsky's preferred scenario. 
Hanson says that in addition to all that he's said, economics (which, again, I am not qualified to evaluate) says that hockey-stick plans (referring to the shape of a graph, specifically ones with huge sudden sustained spikes in growth rate) are rare and you should never plan on them. I wish that Hanson was advising venture capitalists. Yudkowsky says that sustained strong recursion just is like that and you can't use existing models to model it because it's fundamentally different. Hanson says that "friendliness", as Yudkowsky defines it, is exactly the sort of thing that economics is qualified to talk about, it's just game theory plus some other stuff. I think that Hanson is pulling an XKCD 793. (yes, I know XKCD is garfield for grad students, I was a grad student, shut up) Yudkowsky addresses a concern that no one has raised, but that someone should have raised. I'll devote some time to this one: He says that sometimes people hear his ideas and reject them, and the reason they give is "you don't have a PhD!". Personally I believe this; it seems that in order to have weird beliefs you are supposed to have a PhD. He gives the example of Eric Drexler, who he really likes, who did basically the same thing as Yudkowsky except with molecular nanotechnology. People rejected his ideas because he was just some autodidact kid, so he went and studied hard and got a PhD, and then wrote what I understand to be The Book on molecular nanotechnology. Awesome. Yudkowsky is unsatisfied with this because he doesn't think that people listen to Drexler any more than they used to. He's wrong, but I think I understand why. Here's what he says: "But did the same people who said, 'Come back when you have a PhD', actually change their minds at all about molecular nanotechnology? Not so far as I ever heard." No, Yudkowsky. You wouldn't have heard, that's not what happens when academia changes its mind. Instead, you'd just see a ton of citations of the book Nanosystems. 
Google Scholar says it has ~1700, which to me says "mainstream acceptance". Woo! But nobody came bowing and scraping to Drexler, saying "you were right all along, we are fools, please forgive us oh mighty one", like in Yudkowsky's academia fan fiction, that Bayes cult he's depicted a few times, which appears to argue that scientific discoveries should all take mere weeks and that the highest virtue is the ability to do arithmetic in your head and be confident in it. Yudkowsky won't go get a PhD, because he doesn't believe that I'd agree with him once he did. And he's right about that! I wouldn't, but getting a PhD would give him the common background with me to be able to talk in the right way that I can understand his ideas, follow his logic, and then decide whether I agree or not. And it would give him the common ground to understand my response, and we could have a dialog. That's why academically minded people tell him to go get a PhD, not just because they want to dismiss him (though, of course, that's part of it), but because if he did go get one, then maybe we'd be able to talk. Hanson, of course, already has a PhD, and is a professor. I assume he doesn't respond out of politeness. Hanson argues that AIs, while they're developing in their slow advancement towards greater and greater intelligence, will share information. He says that it's obviously preferable to share insight rather than to keep it all secret, unless you assume that your fellow researchers / AIs won't do the same. Again, as an economist he feels qualified to talk about this; I'm not qualified to judge it. Yudkowsky doesn't really respond to this anymore, instead he says that he thinks that emulating human brains, like Hanson proposes, is a cop-out. He says that nothing counts unless we as humans understand intelligence and build it into a system. A system which evolves intelligence as an emergent phenomenon is bullshit, and we're wasting computing cycles trying to make it. 
I agree with him, actually, although Google Brain and various other deep learning systems seem bent on disproving us. There was a recent-ish development that makes me feel right, but again, people are divided on this. Read details here if you're interested. Hanson notices that Yudkowsky is not really debating him anymore, and decides to finish the whole thing up with a few more posts. They've agreed to disagree after all. Gosh, if only they knew about the agreement theorem. I'm going to quote what I consider his concluding arguments: Hanson posted:So I suspect this all comes down to how powerful is architecture in AI, and how many architectural insights can be found how quickly? If there were say a series of twenty deep powerful insights, each of which made a system twice as effective, just enough extra oomph to let the project and system find the next insight, it would add up to a factor of a million. Which would still be nowhere near enough, so imagine a lot more of them, or lots more powerful. Basically, "Yudkowsky wants to believe this, and I get why he wants to; it makes him very important. But nobody else agrees with him, and his math is unconvincing, and he can't seem to make a persuasive argument for why it's true." Yudkowsky, of course, has no real response. His closing argument: Yudkowsky posted:That's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is really truly actually the way the future will be. That's Yudkowsky for "man, I really just want to believe this, and you'll have to take my word for it because I've thought about it a lot." That's all there is on the AI FOOM debate. Let me know if some of this makes no goddamn sense, or if you read it and you want to talk about something; I'd be happy to expand on anything I've written here. 
Hell, if you know what I'm talking about better than I do, correct me. Thanks for reading, I know this was a hell of a lot of text.
|
# ? Nov 18, 2014 08:18 |
|
Wow, that is an awesome - and intimidating - effort post. SolTerrasa posted:But nobody came bowing and scraping to Drexler, saying "you were right all along, we are fools, please forgive us oh mighty one", like in Yudkowsky's academia fan fiction, that Bayes cult he's depicted a few times, which appears to argue that scientific discoveries should all take mere weeks and that the highest virtue is the ability to do arithmetic in your head and be confident in it. Yudkowsky won't go get a PhD, because he doesn't believe that I'd agree with him once he did. And he's right about that! I wouldn't, but getting a PhD would give him the common background with me to be able to talk in the right way that I can understand his ideas, follow his logic, and then decide whether I agree or not. And it would give him the common ground to understand my response, and we could have a dialog. That's why academically minded people tell him to go get a PhD, not just because they want to dismiss him (though, of course, that's part of it), but because if he did go get one, then maybe we'd be able to talk. Yud, and by extension most LW people, seem deeply confused about how academia and actual research work. I'm reminded of his plan to go form a "math monastery" in South America, because of how insane the STEM funding environment in the US is (proof of the broken-clocks theorem if I ever saw it). SolTerrasa posted:Instead, you'd just see a ton of citations of the book Nanosystems. Google Scholar says it has ~1700, which to me says "mainstream acceptance". Is 1700 citations really that much for a book that was published in 1992? Juan Maldacena's original AdS/CFT paper, which basically invented a new joint subfield of condensed matter theory and high energy theory in physics, has over 10,000 citations. 
This "create a whole new field of molecular nanotech" thing Drexler was attempting to do sounds exactly like what Maldacena actually did, and yet Maldacena has been 10 times as successful even though Drexler had an 8 year head start. Or am I looking at this wrong, and molecular nanotech is a more niche area than theoretical physics crossover? If it is sensible to compare these two things it doesn't surprise me that Yudkowsky thinks Drexler was ignored, because it seems like he sort of has been.
|
# ? Nov 18, 2014 08:45 |
|
SolTerrasa posted:I'm going to make a strong effort to finish the rest of the 400 pages in front of me in this post. Thank you for that. It's interesting to hear about this stuff from someone who knows what they're talking about and is willing to spend time writing it down in a form non-specialists can understand. SolTerrasa posted:As an aside, why the gently caress is it always Google? We're always the assholes in these wild fantasies, murdering and enslaving emulated people. I promise you right here, no one I work with wants to enslave or murder emulated humans. Justine Tunney has a lot to answer for.
|
# ? Nov 18, 2014 09:25 |
|
SolTerrasa posted:If the only thing you take away from my analysis is this, you've pretty much got the gist of it. If you can build an AI which can optimize an AI for "intelligence", and you can achieve k > 1.0, you will probably cause the singularity. You're also implying not only that an AI will be able to build a better AI more or less for free, but also that this is something which can be iterated an arbitrary number of times. I mean I guess it's silly to start worrying about practical considerations after we've just hand-waved away all objections to the fact that we don't have a good model for intelligence, that even if we had a good model that wouldn't imply that we could build one, even if we could build one that doesn't imply that it would have a knob for adjusting the level of intelligence, and even if we could adjust the intelligence that doesn't mean that it would necessarily be `intelligent' in all conceivable ways (so that it would just know how to do whatever it was it needed to do to self-improve, whatever field of knowledge that required), and even if it was `intelligent' in whatever arbitrary ways it needed to be to self-improve that doesn't imply that it would. And after stipulating all of this, stipulating that the hardware costs associated with squeezing out a series of exponentially better platforms in an exponentially shorter period of time will actually be lower than, say, re-tooling a line for Pentium n+1, and further stipulating that we can just do this until our really, really smart self-improving computer has a literally infinite INT stat on its character sheet, all that's just rounding error. And presumably our first AI won't have thought of all of this, or if it does it's surprisingly copacetic with the thought of becoming instantly obsolete. Indeed, it will be sanguine about the idea that we conjured it into existence entirely so that it would make itself immediately obsolete for us. 
Because in the future of the singularity, nobody would ever think of just recycling a computer that's literally infinitely less capable than what you can currently order online at that moment. Because in the future of the singularity, recycling is murder.
|
# ? Nov 18, 2014 13:03 |
|
bartlebyshop posted:Is 1700 citations really that much for a book that was published in 1992? Juan Maldacena's original AdS/CFT paper, which basically invented a new joint subfield of condensed matter theory and high energy theory in physics, has over 10,000 citations. This "create a whole new field of molecular nanotech" thing Drexler was attempting to do sounds exactly like what Maldacena actually did, and yet Maldacena has been 10 times as successful even though Drexler had an 8 year head start. Or am I looking at this wrong, and molecular nanotech is a more niche area than theoretical physics crossover? If it is sensible to compare these two things it doesn't surprise me that Yudkowsky thinks Drexler was ignored, because it seems like he sort of has been. Yeah, he basically has been. It's no real surprise that Yud (and Kurzweil and others) are drawn to Drexler, because he's an unquestionably brilliant man, but a lot of people with experience on the scales he's talking about think he's absolutely nuts. His idea consisted of "make a robot that makes a smaller robot that makes... until you get to a molecular assembler," which runs into a bunch of hard physical limits just like Yud's AI. He had a fairly highly publicized debate with the chemist Richard Smalley over exactly these limits. The basic idea was that the molecular assembler is a nanoscale equivalent of a factory robot, which can pick up atoms and put them into the proper place in the nanomachine you're building. The first big problem here is called the "fat fingers" problem: how do you pick up one atom without it dragging more along? Atoms bond to one another, and the things you'd build a machine out of (carbon) form exceptionally strong bonds to one another. Assuming that you've bypassed that problem, congratulations! You've successfully broken the bonds and pulled that atom free! 
However, chemistry dictates that if you managed to break all of those bonds, it's because whatever you've picked it up with formed an even stronger bond (unless it happened for entropic reasons, but saying "a crapload of disorder results" is not exactly a strong argument for a precision robot). So how do you break that stronger bond when you want to drop the atom off? This is called the "sticky fingers" argument. Drexler notably didn't want to talk about those issues, and just wanted to focus on atomic scale phenomena from a "mechanical" perspective (i.e., Legos).
|
# ? Nov 18, 2014 14:55 |
|
Eliezer Yudkowsky's facebook page posted:This is why most taxes should be on consumption (value-added tax, luxury tax) and fixed resources (land value tax); while capital gains taxes and corporate income taxes and income tax should all be zero: To sum up his argument, capital gains and income taxes shouldn't exist because the wealthy and corporations don't own real things with expenses associated with them, and thus all that money goes back into the economy, so we should encourage this. Rather, all taxes should be based on consumption taxes (like sales tax/VAT/GST/whatever your local equivalent is) or luxury purchase taxes. Like, just reading it I can see the initial problem here: correct me if I'm wrong, but as a general rule governments need a lot of money to provide even the most basic services and you've got to take it from somewhere, and it's significantly better to take it from people with a lot of money/assets, such as individuals and organisations with significant amounts of capital, rather than the poor, because those with capital can still buy food at the end of the day whereas those without can't. Sure, you can shift your taxation off onto luxury goods like Yudkowsky suggests, but then those who would otherwise be buying mansions won't buy those luxuries anymore (or at least will buy them in smaller amounts), and suddenly your tax income has plummeted and you can't pay for basic services. Big Yud acknowledges that people might be less inclined to spend money on luxuries because of an increased luxury tax, but he completely fails to acknowledge that this means your prime source of tax income under his model has disappeared. But that's okay, because something something job creators. I dunno, I don't have a particularly deep economics background so I might be off, but this strikes me as essentially the job creators argument. 
Except it doesn't make sense because he assumes that by buying physical goods you actually take the money out of the economy and are basically hoarding it. But you're not because even say, a mansion, requires upkeep, maintenance, etc. Even in his model, by hoarding the gold the dragon is creating jobs because it needs guards, etc., so it is still "giving back". Can somebody back me up here/tell me why he's full of poo poo/tell me why he's right? Because it seems to me he should drop economics and stick to what he does best: nothing.
|
# ? Nov 18, 2014 16:07 |
|
It's time to reject the reflexive kowtowing to the supposed genius of people like Drexler, Kurzweil, and Yud. They are cranks who get famous by telling the rich and nerds that they are special and that their worldview is correct.
|
# ? Nov 18, 2014 16:21 |
|
Let's see how SlateStarCodex is doing! SlateStarCodex posted:the dysgenic effect long believed to exist from poor people having more children has stabilized and may be reversing, at least among whites. Actually let's not. Continue yudmocking
|
# ? Nov 18, 2014 16:21 |
|
Consumption taxes are a terrible idea if you're trying for basic Keynesian economic policy or trying to tax the people who can actually afford it. To start, his whole idea that rich people spend their extra money is completely goddamned wrong. This is related to Marginal Propensity to Consume, which is essentially "if I gave you a dollar, how much of it would you spend and how much would you save?" MPC drops as income rises, so the poor will spend nearly all of a windfall on things they've been putting off due to lack of money (car inspections, doctor's appointments), while richer people will sock some away for retirement/a rainy day/etc. Also, since MPC is higher for the poor, consumption taxes disproportionately hit the poor pretty much by definition, so you're trying to squeeze blood from a stone while the rich go pretty much untouched. Also, if you're trying to maximize the amount of money circulating, it should go without saying that you want to encourage spending and discourage savings. So why would taxing people on purchases encourage them to purchase more? Also also, the last thing you want for a stable economic policy is a tax base that collapses the moment a recession hits and consumption stops.
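To put rough numbers on the MPC point (the household figures below are illustrative made-up values of mine, not from any actual incidence study): if spending is roughly MPC times income, then a flat consumption tax costs each household tax-rate times MPC of its income, which is highest for exactly the people with the least money.

```python
# Back-of-the-envelope sketch of why a flat consumption tax is regressive.
# All incomes and MPC values below are invented for illustration.

def effective_rate(income, mpc, consumption_tax=0.10):
    """Tax paid as a share of income, assuming spending = MPC * income."""
    spending = mpc * income
    return consumption_tax * spending / income  # simplifies to tax_rate * mpc

households = [
    ("poor", 20_000, 0.95),    # spends nearly every dollar
    ("middle", 60_000, 0.80),
    ("rich", 500_000, 0.40),   # socks most of it away
]

for name, income, mpc in households:
    print(f"{name:>6}: pays {effective_rate(income, mpc):.1%} of income in tax")
```

With a 10% consumption tax the poor household pays 9.5% of its income while the rich one pays 4%, which is the "squeeze blood from a stone while the rich go untouched" point in arithmetic form.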
|
# ? Nov 18, 2014 16:31 |