  • Locked thread
Zoq-Fot-Pik
Jun 27, 2008

Frungy!

Djeser posted:

on a scale of yudkowsky to nite crew, how ironic is this anime we're talking about

i know gentlegoons always temper their anime with irony


Djeser
Mar 22, 2013


it's crow time again

Srice posted:

nite crew is not ironic about enjoying anime, friend

but nite crew posts in lower case, the punctuation of irony

Srice
Sep 11, 2011

Djeser posted:

but nite crew posts in lower case, the punctuation of irony

in this post-post-irony world i can't even tell who's being ironic here

Djeser
Mar 22, 2013


it's crow time again

irony, but ironically

Triskelli
Sep 27, 2011

I AM A SKELETON
WITH VERY HIGH
STANDARDS


Anime is very serious business



Also, don't let Yudkowsky out of the box

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
Wow how did this thread get so good all of a sudden?

Seshoho Cian
Jul 26, 2010

Pavlov posted:

Wow how did this thread get so good all of a sudden?

Squidster
Oct 7, 2008

✋😢Life's just better with Ominous Gloves🤗🧤
Once MIRI reaches its kickstarter stretch goal, they will definitely build a singularity waifu for each and every one of us.



HMS Boromir
Jul 16, 2011

by Lowtax

Don Gato
Apr 28, 2013

Actually a bipedal cat.
Grimey Drawer
So is Ghost in the Shell Yudkowsky's ideal end state or his nightmare world?

The Lord of Hats
Aug 22, 2010

Hello, yes! Is being very good day for posting, no?
It's been a while since I've seen it, but it seems like it'd be closer to the ideal end state, given the lack of 'true' AI in the setting (that'd just be the Tachikomas, and even that by accident, right?). I can't imagine he'd be too upset with a setting where technoimmortality is a thing, but then again I don't really know the inscrutable ways of the Yudkowsky yet.

The Vosgian Beast
Aug 13, 2011

Business is slow
LW-ite argues that sexism is literally empiricism

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
My sexism and prejudices are justified by virtue of "where there's smoke there's fire."
--Master Rationalist

Epitope
Nov 27, 2006

Grimey Drawer

quote:

I find this whole Dark Enlightenment/Neoreaction/Neopatriarchy development fascinating because it shows the failure of the progressive project to control the human mind. In the U.S., at least, progressives have a lot of control in centrally planning the culture towards Enlightenment notions of democracy, feminism, egalitarianism, cosmopolitanism, tolerance, etc. Yet thanks to the internet, men who previously wouldn't have had the means to communicate with each other have organized in a Hayekian fashion to discover that they have had similar damaging experiences with, for example, women in a feminist regime, and they have come to similar politically incorrect conclusions about women's nature. And this has happened despite the policies and preferences of the people who hold the high ground in education, academia, law, government and the entertainment industry.

I can see why the emergence of secular sexism drives progressive nuts, because they wrongly believed that sexism depended on certain kinds of god beliefs that have fallen into decline, as this AlterNet article explores. Uh, no, why would anyone have ever thought that? We can't observe gods, but women exist empirically, and men have had to live with them all along. If the resulting body of experiences with women have condensed into a patriarchal tradition which puts women in a bad light - well, you can't blame that on theology, now, can you?

Haha, this is the essence of what somethingawful was built on mocking. People with horrible views being able to gather online and reinforce each other's bs. Doubly so in this case, since they're acutely aware of the phenomenon but take it as a good thing.

Luigi's Discount Porn Bin
Jul 19, 2000


Oven Wrangler

Epitope posted:

I find this whole Dark Enlightenment/Neoreaction/Neopatriarchy development fascinating because it shows the failure of the progressive project to control the human mind.
Welp. Thought we were on track lately, what with the whole marriage equality business, but I've just been informed that some self-aggrandizing Bay Area screwballs have been writing long-winded blog entries about monarchism. Pack it in, boys.

Polybius91
Jun 4, 2012

Cobrastan is not a real country.
I got into an irritating and exhausting argument with a friend about AI not too long ago, and after it was over, I thought of this thread. Most of his points were faulty right out of the box, but he said some stuff that I legitimately didn't know how to respond to, even though I have a Bachelor's in CS. Can anyone smarter and more experienced than I am take a crack at these (roughly paraphrased) arguments or point me to useful reading on the subject?

-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."

-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output seqeuences, so there's no reason to think that some combination of input couldn't cause them to become conscious."

-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts suchs as slavery and freedom."

-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."

SubG
Aug 19, 2004

It's a hard world for little things.

Polybius91 posted:

-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."
Anonymous functions are a bit of a red herring here; they aren't a prerequisite for self-modifying code.

And I guess he's claiming that humans `rewrite their own code on the fly'? I'm not sure I even know what that means. Like there's a vague notion that you can sit down and, if you don't know, I dunno, algebra, you could learn it. But apart from just sorta insinuating that this process is similar to, or perhaps (more strongly) identical to, tacking a subroutine onto a running programme, I really don't see the argument for this model.

Put more strongly: I don't think humans learn by adding subroutines.

Polybius91 posted:

-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output seqeuences, so there's no reason to think that some combination of input couldn't cause them to become conscious."
Human brains `operate on binary input/output sequences'?

But I'm not sure I understand this line of argument. Is he arguing that any arbitrary mechanism that accepts input might, given the right input, spontaneously develop sentience independent of architecture? Because that's fairly astonishing. It implies that, for example, your garage door opener might suddenly become self-aware if you just manipulated the remote in the proper way.

I think it also implies that any other creature that `operates on binary input/output sequences' similarly to the way humans do would become intelligent if exposed to the right input. So a very thoughtful conversation with a stoat, say, or a trout would produce intelligence.

Polybius91 posted:

-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts suchs as slavery and freedom."
Arguments of the general form `using [foo], computers are much better at [bar]' are just the conclusion masquerading as an argument. `Better' how? Why should we think that [foo] leads to [bar]? That's the argument. Because if we knew the answer to it, we could just implement [bar] directly, and we wouldn't have to sit here arguing that [foo] is going to lead to it at some point in the indistinct future.

Polybius91 posted:

-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."
Every year there are more cars in the world, so at some point we'll be able to drive at the speed of light.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Polybius91 posted:

-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."
This is not a unique feature of Lisp, but neither would it matter if it was. An anonymous function is a completely regular function without an identifier in the local namespace. They don't allow a computer to rewrite its code any more than it already can.
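To make that concrete, here's a minimal sketch (a toy example of my own, in Haskell since that's what's already in the thread): the named version and the anonymous version are the same function, and neither of them rewrites anything.
code:
-- a named helper and an anonymous lambda; they are interchangeable
double :: Int -> Int
double x = x * 2

main :: IO ()
main = do
  print (map double [1, 2, 3])          -- named version: [2,4,6]
  print (map (\x -> x * 2) [1, 2, 3])   -- anonymous version, same output
The lambda is just a function that never got a name. That's all "anonymous" means.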

Polybius91 posted:

-"We don't know what causes consciousness/sentience, and computers are similar to human brains in that they operate on binary input/output seqeuences, so there's no reason to think that some combination of input couldn't cause them to become conscious."
Half right. "Substance agnosticism" is the idea that there is nothing physically unique about the brain that can't theoretically be imitated by a digital computer. The brain isn't digital, though, it's a mixed paradigm of analog and digital signal processing methods. So it might be true, but not for the reason he gives.

Polybius91 posted:

-"Through tools like fuzzy logic, computers are much better at handling abstract concepts than they used to be, which means they might develop concepts suchs as slavery and freedom."
That is so stupid, it isn't even wrong.

Polybius91 posted:

-"Hardware limitations are becoming less relevant because networked computers can use the resources of others, even without their permission (as with botnets)."
Latency issues and the inherent restrictions of parallelization substantially limit how much networking can replace pure processing power. Some important algorithms need to be worked through in a strict order. You can't parallelize when doing so requires you to know the future. Also, physical distance increases response times, which may, on balance, make the whole thing less efficient. Botnets work because all the bots independently do the same thing and need no outside input. That doesn't make them supercomputers.
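For a rough sense of scale, here's a back-of-the-envelope Amdahl's law sketch (the 10% serial fraction is a number I picked purely for illustration, not a measurement of anything):
code:
-- Amdahl's law: speedup from n machines when a fraction s of the work is serial
amdahl :: Double -> Double -> Double
amdahl s n = 1 / (s + (1 - s) / n)

main :: IO ()
main = mapM_ (print . amdahl 0.1) [1, 10, 100, 1000, 1000000]
-- prints roughly 1.0, 5.3, 9.2, 9.9, 10.0
A million botnet nodes top out at a 10x speedup if a tenth of the job has to run in order, and that's before you charge them anything for latency.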

Polybius91
Jun 4, 2012

Cobrastan is not a real country.
Thanks for the responses. In retrospect, I already knew some of those things on some level and should've considered them, but they slipped my mind (or I had a hard time putting them into words) in the heat of an argument.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
Tell him that tools like fuzzy logic are what make argumentation like that possible.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
Addendum, wrt anonymous functions and self-rewriting code. This is pretty much an expansion of SubG's explanation featuring code samples. He probably meant first-class functions or macroing, by the way.

A lot of languages including Lisp have a facility to substitute code into other code. (Lisp has more than one, in fact) In Haskell, for instance, you can write this:

code:
threeTimes x = do {x; x; x; }

main = do
  putStrLn "Hey!"
  putStrLn "Hey!"
  putStrLn "Hey!"

main' = do
  threeTimes (putStrLn "Hey!")
(main and main' generate the same output)

You can also write code like this (this is actually much easier to write in Lisp, but I'm more familiar with Haskell and trust myself more that I won't screw it up):
code:
red x = ???
green x = ???

-- substitutes red commands for green commands, by breaking the code block into greens and reds
-- this is more complicated than you want to get into right now and probably takes about twenty to thirty lines of code (less with libraries)
makeReds code = ???

main = runStreetLight $ do
  red "stop!"
  red "stop!"
  red "stop!"

main' = runStreetLight $
  makeReds $ do
    green "stop!"
    red "stop!"
    green "stop!"
(main and main' generate the same output)

So while it's technically true that, via substitution, you're writing self-modifying code, it doesn't follow that the process that modifies the code does anything useful or clever (similar to SubG's comments). The intelligence is still somewhere else. We wouldn't say that makeReds is any smarter than find/replace -- in fact, since both Lisp and Haskell do this by representing the input code block as a list or nested list of instructions the same way you might represent text as a list of characters, it is literally find and replace.
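If you want to see the find-and-replace point in the flesh, here's a stripped-down sketch (much dumber than any real macro system, and I'm leaving runStreetLight out entirely): the "code block" is just a list of values, and makeReds is a map over that list.
code:
-- commands as plain data; "rewriting the code" is just rewriting a list
data Cmd = Red String | Green String
  deriving Show

makeReds :: [Cmd] -> [Cmd]
makeReds = map toRed
  where toRed (Green s) = Red s
        toRed c         = c

main :: IO ()
main = mapM_ print (makeReds [Green "stop!", Red "stop!", Green "stop!"])
-- prints Red "stop!" three times, same as if you'd typed them out by hand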

SubG
Aug 19, 2004

It's a hard world for little things.

Krotera posted:

A lot of languages including Lisp have a facility to substitute code into other code. (Lisp has more than one, in fact) In Haskell, for instance, you can write this:
And it's worth noting that the idea that permitting self-modification is some kind of advanced feature of advanced languages for doing advanced poo poo is more or less precisely backwards. Twiddling around with instructions in memory is like the number one stupid assembly trick. And is something that literally every 8 bit micro could do in BASIC with a bunch of judicious (or injudicious) POKE statements. Implementing protection against doing this accidentally is, historically, the advancement. And getting around those protections is, historically, one of the largest vectors for computer security compromises (buffer overflows).

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

SubG posted:

And it's worth noting that the idea that permitting self-modification is some kind of advanced feature of advanced languages for doing advanced poo poo is more or less precisely backwards. Twiddling around with instructions in memory is like the number one stupid assembly trick. And is something that literally every 8 bit micro could do in BASIC with a bunch of judicious (or injudicious) POKE statements. Implementing protection against doing this accidentally is, historically, the advancement. And getting around those protections is, historically, one of the largest vectors for computer security compromises (buffer overflows).

Yeah: I'm overall sympathetic to the feature -- I think substituting code into other code is very useful and usually not too dangerous -- but it's generally very hard to make it safe and Lisp's take isn't particularly strong that way. It's a default condition and making it usable requires more than just restoring how it works by default. The semantics surrounding self-modification in Lisp or especially Haskell are completely different from the kind of semantics surrounding it in native code doing nothing funny.

(You could, grossly abusing the word "creative," say that, e.g., C allows for much more creative self-modification than Lisp does!)

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Polybius91 posted:

I got into an irritating and exhausting argument with a friend about AI not too long ago, and after it was over, I thought of this thread. Most of his points were faulty right out of the box, but he said some stuff that I legitimately didn't know how to respond to, even though I have a Bachelor's in CS. Can anyone smarter and more experienced than I am take a crack at these (roughly paraphrased) arguments or point me to useful reading on the subject?

-"Programming languages such as Lisp, through means such as anonymous functions, allow computers to rewrite their own code on the fly. This allows a computer to essentially change its own thought process."

I wanted to throw my 2 cents in on this one. Your friend isn't even identifying the right old technology. Storing programs in the same memory as data (the von Neumann architecture, which dates from the 1940s) is what opens up the possibility of programs modifying themselves or other programs, not LISP or any other specific language.
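To show how little magic is involved, here's a toy sketch (a made-up three-opcode machine of my own, not any real instruction set): the program and its data live in one list of Ints, so the program can overwrite its own instructions mid-run.
code:
-- opcode 0 a   : print the contents of cell a
-- opcode 1 a v : write literal v into cell a  (the self-modifying part)
-- anything else: halt
poke :: Int -> Int -> [Int] -> [Int]
poke addr val mem = take addr mem ++ [val] ++ drop (addr + 1) mem

run :: Int -> [Int] -> IO ()
run pc mem = case mem !! pc of
  0 -> do print (mem !! (mem !! (pc + 1)))
          run (pc + 2) mem
  1 -> run (pc + 3) (poke (mem !! (pc + 1)) (mem !! (pc + 2)) mem)
  _ -> putStrLn "halt"

main :: IO ()
main = run 0 [1, 3, 9,   -- write 9 over cell 3, i.e. over the next opcode
              0, 5,      -- would print cell 5, but never runs: it's now a halt
              42]
The program clobbers its own next instruction and halts. Nothing about that requires, or produces, anything you'd call thinking.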

There's no doubt this is a useful tool to have in the box, and not just for AI: you need it to do dynamic linking, JIT compilers, and many other things. However, we haven't got the foggiest idea how to implement humanlike cognitive or learning processes on top of it. Similarly, though we understand some things about neurons (biology's self-malleable building blocks), we don't know much about how they work together as the substrate of human intelligence. There is simply a giant gap in knowledge here.

Put another way, taking credit for self-modifying code as a major AI milestone is a bit like taking credit for figuring out how to burn tree branches as a major step towards modern industrial civilization. Significant, essential? Sure, you can always say that about baby steps. But they're still just baby steps.

SolTerrasa
Sep 2, 2011


All of what's been said is true. In addition, one essential response to anyone who suggests pretty much anything about singularities is that we have no idea how to implement humanlike cognition at all. We have no reason to believe that doing so will be a "one weird trick" sort of thing, and a lot of reasons to believe that it won't. There are a lot of subfields of AI, and almost all of them have hard unsolved problems; we're not going to magically discover it all sometime soon. There are too many pieces that need to come together.

Slime
Jan 3, 2007

BobHoward posted:

I wanted to throw my 2 cents in on this one. Your friend isn't even identifying the right old technology. Storing programs in the same memory as data (von Neumann architecture, dates from the 1940s) is what opens up the possibility of programs modifying themselves or other programs, not LISP or any other specific language.

There's no doubt this is a useful tool to have in the box, and not just for AI: you need it to do dynamic linking, JIT compilers, and many other things. However, we haven't got the foggiest idea how to implement humanlike cognitive or learning processes on top of it. Similarly, though we understand some things about neurons (biology's self-malleable building blocks), we don't know much about how they work together as the substrate of human intelligence. There is simply a giant gap in knowledge here.

Put another way, taking credit for self-modifying code as a major AI milestone is a bit like taking credit for figuring out how to burn tree branches as a major step towards modern industrial civilization. Significant, essential? Sure, you can always say that about baby steps. But they're still just baby steps.

I'd say self modifying code is just the tool a hypothetical strong AI would use rather than what actually gets the job done. Imagine if you knew how to modify your brain, change the way you think at will. That ability would be useless to you without the means to identify what would be a meaningful, useful change. Why are people assuming that just because it's an AI it knows how to code, or really understands how its own hardware works? We barely know anything about our own brains and we've had them for thousands of years.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Slime posted:

I'd say self modifying code is just the tool a hypothetical strong AI would use rather than what actually gets the job done. Imagine if you knew how to modify your brain, change the way you think at will. That ability would be useless to you without the means to identify what would be a meaningful, useful change. Why are people assuming that just because it's an AI it knows how to code, or really understands how its own hardware works? We barely know anything about our own brains and we've had them for thousands of years.
An AI based on modern computer technology is somewhat constitutionally different from a human brain. We don't even really have access to the majority of our own mind, while an AI would be able to read every single bit that composes it, by design, unless artificially constrained. As to how this makes it able to improve on itself? Well, the logic goes "humans design AI smarter than themselves -> AI designs AI even smarter than itself -> smarter AI designs yet smarter AI -> repeat."

It's a bit simplistic, but at least it isn't categorically nonsense, like some of the stuff these people come up with. It just relies on the premise that you can intentionally design an intellect that's greater than your own, which is neither provably right nor wrong.

The Vosgian Beast
Aug 13, 2011

Business is slow
Who wants some more nuance in their lives?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'


quote:

The rationality movement isn’t about epistemology.
Everything is actually about signalling.

Literally "we don't actually care about studying knowledge or anything like that, we just pretend we do because we think it makes us look cool."

SubG
Aug 19, 2004

It's a hard world for little things.

Cardiovorax posted:

It's a bit simplistic, but at least it isn't categorically nonsense, like some of the stuff these people come up with. It just relies on the premise that you can intentionally design an intellect that's greater than your own, which is neither provably right nor wrong.
It's not even provably coherent. Proponents of `strong AI' (or however you want to say it) invariably believe that intelligence is more or less equivalent to the INT stat in D&D...a single capacity or capability which can be fully described by a single scalar quantity. If human intelligence is actually something that can only be fully described by multiple variables then there's no guarantee that there's a way to unambiguously define an overall maximum, much less optimise for it.
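A toy illustration of the multiple-variables problem (all the numbers are invented for the example): give two minds a verbal score and a spatial score and neither one dominates the other, so `which is smarter' has no answer until you pick a weighting, and the weighting is precisely the thing nobody can justify.
code:
-- an "intelligence" as a vector of scores, e.g. [verbal, spatial]
type Profile = [Double]

dominates :: Profile -> Profile -> Bool
dominates a b = and (zipWith (>=) a b) && or (zipWith (>) a b)

smarter :: [Double] -> Profile -> Profile -> Ordering
smarter weights a b = compare (score a) (score b)
  where score = sum . zipWith (*) weights

main :: IO ()
main = do
  let a = [9, 3]; b = [3, 9]
  print (dominates a b, dominates b a)   -- (False,False): neither wins outright
  print (smarter [1.0, 0.1] a b)         -- GT: weight verbal heavily, a is "smarter"
  print (smarter [0.1, 1.0] a b)         -- LT: weight spatial heavily, b is "smarter"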

And of course there's the possibility that the whole concept of intelligence is so broadly defined that it has no specific referent, so we can faff around concocting definitions that help whatever argument we wish to make but little more. If some researcher was really, really into dogs and announced that he was on the verge of creating the first transdogian entity because he could make a dog that's more doggy than dogness...well, you know maybe he has knocked together some special-use definition of `dog', `dogness', and so on where this makes some sort of sense. But that doesn't mean that it's not all just hollow wankery that doesn't have much to do with anything else.

Chamale
Jul 11, 2010

I'm helping!



If it's purely a measure of a computer's technical capabilities, those obviously encounter physical limits at some point. The first time someone invents an AI that can design and build a faster processor for itself, it won't be able to iterate infinitely; eventually it will hit a limitation like heat dissipation, or essential components being only one atom wide. I think computer programs, although they can do creative things, tend to be creative within certain boundaries. Making a creatively unrestricted AI means it spends even more wasted time on useless dead ends, even if it could hypothetically achieve some paradigm shift that most AIs couldn't.

SolTerrasa
Sep 2, 2011


AI FOOM Debate, Part 3

I'm going to make a strong effort to finish the rest of the 400 pages in front of me in this post. Consequently I'm going to speed past some points, but I really love talking about this stuff. I finally had a good discussion with my coworkers at Google, so I feel even more empowered to present what I think of as the mainstream opinion among people who actually work on AI.

To start us off, I'm going to give a quick summary of the previous 120 pages of debate. Eliezer Yudkowsky believes that once we design an AI which is more intelligent than a human, that intelligence will be able to improve itself faster than we could improve it. "More intelligent" may be an incoherent concept, as SubG has said above me, but might not; the g factor [1] might turn out to be maximizable. Probably not, but it's not definitely incoherent. Eliezer believes, in short, that this will lead to an intelligence explosion, where a slightly-smarter-than-human AI will evolve into a godlike AI in well under 6 months. His preferred timescale is about a week. Hanson believes instead that this will happen very slowly, over the course of years or decades, and that humans will have the chance to intervene if the AI turns out to be evil. If you grant Yudkowsky his belief, then his work with MIRI makes sense; if you grant Hanson's, then his work with the FHI makes sense. Their beliefs are incompatible with each other, and they're using Aumann's Agreement Theorem to (incorrectly) require that one of them change their view.

At the end of the last post, we had reached this core disagreement. Because of the way they both talk, it took that long to actually get here, but now things start speeding up. Yudkowsky argues that there are five distinct sources of discontinuity in the world: "Cascades, Cycles, Insight, Recursion, and Magic", in escalating order of importance. Cascades are what happens when one innovation opens up another innovation opens up another innovation. Greatly simplified, it's how you get from unseeing ancient sea creatures to light-sensitive ones to eyeballs. This produces an effective source of discontinuity between the ancient sea creatures and the ones with eyeballs. Then there are cycles. Cycles are what you get when you perform a repeatable transformation, like the example of a pile going critical and producing plutonium. In fact, I'm just going to quote Yudkowsky here. I'm trying to do this less and less because I have 400 pages to get through, but this one is important.

Yudkowsky posted:

Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternate layers of graphite and uranium...

The key number for the "pile" is the effective neutron multiplication factor. When a uranium atom splits, it releases neutrons - some right away, some after delay while byproducts decay further. Some neutrons escape the pile, some neutrons strike another uranium atom and cause an additional fission. The effective neutron multiplication factor, denoted k, is the average number of neutrons from a single fissioning uranium atom that cause another fission. At k less than 1, the pile is "subcritical". At k >= 1, the pile is "critical". Fermi calculates that the pile will reach k=1 between layers 56 and 57.

On December 2nd in 1942, with layer 57 completed, Fermi orders the final experiment to begin. All but one of the control rods (strips of wood covered with neutron-absorbing cadmium foil) are withdrawn. At 10:37am, Fermi orders the final control rod withdrawn about half-way out. The geiger counters click faster, and a graph pen moves upward. "This is not it," says Fermi, "the trace will go to this point and level off," indicating a spot on the graph. In a few minutes the graph pen comes to the indicated point, and does not go above it. Seven minutes later, Fermi orders the rod pulled out another foot. Again the radiation rises, then levels off. The rod is pulled out another six inches, then another, then another.

At 11:30, the slow rise of the graph pen is punctuated by an enormous CRASH - an emergency control rod, triggered by an ionization chamber, activates and shuts down the pile, which is still short of criticality.

Fermi orders the team to break for lunch.

At 2pm the team reconvenes, withdraws and locks the emergency control rod, and moves the control rod to its last setting. Fermi makes some measurements and calculations, then again begins the process of withdrawing the rod in slow increments. At 3:25pm, Fermi orders the rod withdrawn another twelve inches. "This is going to do it," Fermi says. "Now it will become self-sustaining. The trace will climb and continue to climb. It will not level off."

Herbert Anderson recounted (as told in Rhodes's The Making of the Atomic Bomb):

"At first you could hear the sound of the neutron counter, clickety-clack, clickety-clack. Then the clicks came more and more rapidly, and after a while they began to merge into a roar; the counter couldn't follow anymore. That was the moment to switch to the chart recorder. But when the switch was made, everyone watched in the sudden silence the mounting deflection of the recorder's pen. It was an awesome silence. Everyone realized the significance of that switch; we were in the high intensity regime and the counters were unable to cope with the situation anymore. Again and again, the scale of the recorder had to be changed to accomodate the neutron intensity which was increasing more and more rapidly. Suddenly Fermi raised his hand. 'The pile has gone critical,' he announced. No one present had any doubt about it."

Fermi kept the pile running for twenty-eight minutes, with the neutron intensity doubling every two minutes.

That first critical reaction had k of 1.0006.
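Just to put numbers on the quoted figures (my own back-of-the-envelope arithmetic, not Yudkowsky's), here's how slowly a k barely above 1.0 compounds:
code:
-- with multiplication factor k, intensity grows by a factor of k each
-- generation, so one doubling takes log 2 / log k generations
generationsPerDoubling :: Double -> Double
generationsPerDoubling k = log 2 / log k

main :: IO ()
main = print (generationsPerDoubling 1.0006)   -- roughly 1155
About 1155 generations per doubling; since the pile doubled every two minutes, each effective generation was on the order of a tenth of a second. k = 1.0006 still compounds, it just takes a while.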

A cycle counts as a source of discontinuity because it can lead to things that would otherwise be impossible; there's a critical threshold of "number of iterations caused per iteration", or k. Once a process has k > 1.0 it's going to continue forever. This makes sense.

Number 3 is "insight", and Yudkowsky does his famous handwaving to explain that intuition is this magical process that we can harness for the greater good, etc etc etc. Number 4 is "recursion", which is fundamentally exactly the same as "cycles", but with the added bonus that k increases every cycle. Number 5 is literally magic. If I was still playing "mock Yudkowsky" I would put a lot of words here about how this is exactly the same as all Yudkowsky's plans, that it is literally "???? Profit", and that any plan which ends with "and then a miracle literally occurs" is complete garbage. But I'm not going to do all that.

I actually think Yudkowsky has sold past the close; all he really needed here was point number 2. Cycles. If you can set up an AI which is better than humans at optimizing an AI to be "intelligent", and you have got k > 1.0, you have won, and I no longer feel the need to argue with you. If that can be done, none of this other poo poo matters; if that can be done, we're looking at a hard takeoff. My colleagues at Google came to the same consensus, and we didn't need 200 pages of rambling to do it. If the only thing you take away from my analysis is this, you've pretty much got the gist of it. If you can build an AI which can optimize an AI for "intelligence", and you can achieve k > 1.0, you will probably cause the singularity. Who the gently caress knows what happens next. At this point in the argument, I'm with Yudkowsky, I think he's made his point. If he can convince me that he can do these things, then gently caress it, he wins.

Now, those are probably the two biggest "if"s I've ever written down. I don't even think that "intelligence" is something you can optimize for, even given what I said above about the g factor. I don't think you'll be able to write something with k > 1, either, but honestly I'm willing to grant that! If you can tell me what "intelligence" is, and give me a coherent plan for how you're going to build an AI which can optimize an AI for it, then goddamn it, I will change my mind entirely, full reverse, sit down and write MIRI a check.

Okay, I'm going to move back to Hanson. Hanson is in the middle of a debate with some other dude. Carl Shulman is very angry about the idea of whole-brain emulation, and says that once we build emulated minds which run on computers, we'll have created a slave caste, subject to legal murder when they are no longer useful. Hanson chooses to engage, for some reason, but the long and short of it is that Google will probably have no more right to murder its emulated employees than its regular ones. As an aside, why the gently caress is it always Google? We're always the assholes in these wild fantasies, murdering and enslaving emulated people. I promise you right here, no one I work with wants to enslave or murder emulated humans.

After he's done with that debate, he moves back to engaging with Yudkowsky, but he does it all wrong. He's committed to his belief that a hard takeoff is impossible, and he's just not right about that. His position is nearly indefensible. I'll try to represent it as fairly as I can but no one I know agrees with him. He argues that Yudkowsky is handwaving too much about this recursion and that all of his "sources of discontinuity" don't represent fast discontinuities, but rather sources of discontinuity over the course of years or decades. We didn't go from blind sea creatures to eyeballs in a day, despite cascades. He argues that Yudkowsky is failing to account for the source of bias which leads people to view history as goal-directed; he's arguing by cherry-picking his examples and ignoring all the other instances where recursion caused nothing much to happen.

Yudkowsky comes back and says "but those cases were not recursive enough!" It's not an interesting argument to me, but Yudkowsky draws a distinction between things which are like an optimizing compiler (things which can improve performance, but only once) and things which are like a uranium pile (where improving the pile improves the output indefinitely and sustainably). I'm willing to grant him that, but I would note that all I've agreed to here is that it is conditionally possible that his preferred scenario will occur. I still need to be convinced that there's such a thing as "intelligence", and that there's a way to build an AI to optimize it with k > 1. Even after I've been convinced of those things I still need to know that it's likely. Lots of things are possible, but I don't plan my life around the possibility that I'll be struck by lightning on a cloudless sunny day.

Yudkowsky then demonstrates that his knowledge of nanotechnology comes exclusively from science fiction. He tries to take a step back from the FOOM debate (even though he's already made his point) and points to nanotech as something else which is recursive. Nanobots can make more nanobots which can make more nanobots, etc. So a single nanobot factory is enough to take over the entire world, because it can reproduce infinitely. Even if you only make your perfect nanobot factory a week before the researchers on the other side of the world, it's too late, you've won. He's right about this too, in exactly the same way as with his AI FOOM. If you can do a thing that no one agrees is possible, then you will destroy the world. He's made his point but he never seems to notice what else he has to prove before he's done. And Hanson can't seem to call him out on it, because Hanson is still trying to pretend that cycles aren't possible.

His next argument is probably his best, which is that there really aren't any purely recursive things in reality. We always need to feed some input into the system. Nanobots eat more and more resources, maybe AIs eat more and more CPU cycles. Who knows? But it's not fair, he says, to postulate that your particular singularity will be the first ever purely recursive system. He goes back to the concept of total war, again. He seems to believe that the answers to my two important questions ("how do you optimize an AI for optimizing an AI for intelligence" and "how do you make sure k > 1") are "international cooperation and research". He doesn't communicate it very well, but he seems to believe that nobody is going to develop the answers to those questions suddenly and by themselves. Therefore the first strong AI will come from the collaboration of dozens of universities and won't be able to suddenly and immediately FOOM; there'll be a slow and steady climb from k = 0.01 to k = 0.99, and when k = 1.0 it won't immediately spike. Improvements or iterations of Yudkowsky's cycle will still take a lot of time, so there'll be plenty of time to ensure that the AI is still sound every step of the way.

Hanson posted:

I have trouble believing humans could find new AI architectures anytime soon that make this much difference. Some other doubts: ... Does a single "smarts" parameter really summarize most of the capability of diverse AIs?

Yudkowsky replies with something I find extremely ironic; a giant list of Kurzweil's predictions about the year 2009. Most of them were false then, most of them are true now. The point he's trying to make is "the future is very hard to predict", or "maybe we will find such an architecture", but of course he cannot provide any reason to believe that we will, just make the point that there's no reason to believe that we won't. Gosh, this sounds like the sort of disagreement two rationalists would have if they disagreed about priors. If Yudkowsky had read what me and (was it Cardiovorax? Or SubG? Or su3su2u1? I've forgotten) said about Aumann's Agreement Theorem, then he would have ended the debate here. But when he read the theorem, he missed all the caveats and so he'll continue his argument to the bitter end.

There's almost 200 more pages at this point, but I'm really not going to make any more interesting points with it. Yudkowsky's conditionally right if he can demonstrate those two things I've been mentioning over and over, Hanson's right otherwise. Nothing will change this, but for the sake of completeness I'm going to summarize the rest of the arguments.

Hanson points to an example of an AI which was able to perform recursive self-improvement, which is an amazing feat of engineering and absolutely brilliant. It's called EURISKO and you might want to go look it up on your own, legitimately impressive. Of course it didn't end the world, and the researcher who made it came to the conclusion that what's really going to matter in AI is data. When he finished with EURISKO, he went and created a system called CYC, which is essentially just a collection of reasoning systems and facts. Hanson loves it because it's actually useful for AI researchers and can answer encyclopedia questions. CYC is actually a lot like Knowledge Graph, which is what Google uses to give you those nifty little answers when you type in a question like "what is the capital of colombia?". Yudkowsky hates CYC with an undying passion, and that made me very angry because my thesis uses a lot of concepts from CYC. We both treat language as a collection of useful identifiers. "Paris" doesn't mean anything, "Capital" doesn't mean anything, "France" doesn't mean anything, but "Paris" "is-capital-of" "France". "Paris" "has-latlng" "48.8567N, 2.3508E". "Paris" "has-population" "2.21 Million". "Paris" "is-type" "City". This is an incredibly powerful model once you add on one simple thing: interaction with the real world. Whatever sensor your robot has, give it a thousand examples of a city, a thousand examples of a person, a model of the globe. Once you've done all that, you have built a knowledge engine. Your robot can be said to "know" things at least as deeply as we do, in that it can recognize novel versions of them and make predictions about them and reason about them. Yudkowsky hates this because he hates the idea that something can be intelligent in only one way. Like SubG said, he really needs to believe that intelligence is a single scalar value, and so he thinks that this model of intelligence is almost blasphemous. I had a professor in school who felt the same way, but it's not the prevailing view.
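Here's roughly what that model looks like in code, in a deliberately tiny sketch (toy data and a toy query of my own, nothing like CYC's or Knowledge Graph's actual interfaces): the knowledge is a bag of (subject, relation, object) triples, and answering a question is filtering that bag.
code:
-- facts as (subject, relation, object) triples
type Triple = (String, String, String)

facts :: [Triple]
facts =
  [ ("Paris",  "is-capital-of",  "France")
  , ("Paris",  "has-population", "2.21 million")
  , ("Paris",  "is-type",        "City")
  , ("Bogota", "is-capital-of",  "Colombia")
  ]

-- "what is the capital of X?" is a filter over the triples
capitalOf :: String -> [String]
capitalOf country = [s | (s, r, o) <- facts, r == "is-capital-of", o == country]

main :: IO ()
main = print (capitalOf "Colombia")   -- ["Bogota"]
All the real work is in hooking those symbols up to sensors and examples, which is the part the paragraph above is actually about.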

Hanson argues that manufacturing is a bottleneck for the AI, it can't build infinitely many infinitely fast processors instantly, it's going to take tons of time and effort and that's plenty of time to slow the thing down and examine it. Hanson says Yudkowsky seems to be assuming that there's no additional inputs into the system once k > 1.0, and that that's just not true. He uses some economics math that I'm not qualified to evaluate (but he sure is) to demonstrate that production will be increasingly less local as time goes on, so this obstacle will not cease to exist in the future. Hanson does not argue, but should, that at some point hardware will plateau. This is not something that people were sure about several years ago, but to paraphrase my friend (M.S. Electrical Engineering, designs processors for Intel), we're hitting the limits of physics now and Moore's law can't possibly continue unless something really impressive happens.

Yudkowsky says that whatever math you can do doesn't matter because the AI is an entirely new type of thing, which no established models can predict.

Hanson says that when you're making predictions about an entirely new type of thing, which no established models can predict, you make predictions with your new model in the very near very close future, check them, and then if they're right, see what your model says about the far future. Yudkowsky refuses to make a near-future prediction, citing his preference to remain abstract.

Yudkowsky restates his position, and specifies a few negative predictions at Hanson's prompting. Yudkowsky will consider himself wrong if:

Yudkowsky posted:

1. An ad-hoc self-modifying AI undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human - and then stops, unable to progress any further.
2. A mostly non-self-modifying AI is pushed by its programmers up to a roughly human level... then to the level of a very smart human... then to the level of a mild transhuman... but the mind still does not achieve insight into its own workings and still does not undergo an intelligence explosion - just continues to increase smoothly in intelligence from there.
3. No one AI that does everything humans do, but rather a large, diverse population of AIs. These AIs have various domain-specific competencies that are "human+ level" - not just in the sense of Deep Blue beating Kasparov, but in the sense that in these domains, the AIs seem to have good "common sense" and can e.g. recognize, comprehend and handle situations that weren't in their original programming. But only in the special domains for which that AI was crafted/trained. Collectively, these AIs may be strictly more competent than any one human, but no individual AI is more competent than any one human.

Of course, we can't test a negative prediction; those things haven't happened, but there's no good reason to believe them to be less likely than Yudkowsky's preferred scenario.

Hanson says that in addition to all that he's said, economics (which, again, I am not qualified to evaluate) says that hockey-stick plans (referring to the shape of a graph, specifically ones with huge sudden sustained spikes in growth rate) are rare and you should never plan on them. I wish that Hanson was advising venture capitalists.

Yudkowsky says that sustained strong recursion just is like that and you can't use existing models to model it because it's fundamentally different.

Hanson says that "friendliness", as Yudkowsky defines it, is exactly the sort of thing that economics is qualified to talk about, it's just game theory plus some other stuff. I think that Hanson is pulling an XKCD 793. (yes, I know XKCD is garfield for grad students, I was a grad student, shut up)

Yudkowsky addresses a concern that no one has raised, but that someone should have raised. I'll devote some time to this one: He says that sometimes people hear his ideas and reject them, and the reason they give is "you don't have a PhD!". Personally I believe this; it seems that in order to have weird beliefs you are supposed to have a PhD. He gives the example of Eric Drexler, who he really likes, who did basically the same thing as Yudkowsky except with molecular nanotechnology. People rejected his ideas because he was just some autodidact kid, so he went and studied hard and got a PhD, and then wrote what I understand to be The Book on molecular nanotechnology. Awesome. Yudkowsky is unsatisfied with this because he doesn't think that people listen to Drexler any more than they used to. He's wrong, but I think I understand why. Here's what he says: "But did the same people who said, 'Come back when you have a PhD', actually change their minds at all about molecular nanotechnology? Not so far as I ever heard." No, Yudkowsky. You wouldn't have heard, that's not what happens when academia changes its mind. Instead, you'd just see a ton of citations of the book Nanosystems. Google Scholar says it has ~1700, which to me says "mainstream acceptance". Woo! But nobody came bowing and scraping to Drexler, saying "you were right all along, we are fools, please forgive us oh mighty one", like in Yudkowsky's academia fan fiction, that Bayes cult he's depicted a few times, which appears to argue that scientific discoveries should all take mere weeks and that the highest virtue is the ability to do arithmetic in your head and be confident in it. Yudkowsky won't go get a PhD, because he doesn't believe that I'd agree with him once he did. And he's right about that! I wouldn't, but getting a PhD would give him the common background with me to be able to talk in the right way that I can understand his ideas, follow his logic, and then decide whether I agree or not. And it would give him the common ground to understand my response, and we could have a dialog. That's why academically minded people tell him to go get a PhD, not just because they want to dismiss him (though, of course, that's part of it), but because if he did go get one, then maybe we'd be able to talk.

Hanson, of course, already has a PhD, and is a professor. I assume he doesn't respond out of politeness.

Hanson argues that AIs, while they're developing in their slow advancement towards greater and greater intelligence, will share information. He says that it's obviously preferable to share insight rather than to keep it all secret, unless you assume that your fellow researchers / AIs won't do the same. Again, as an economist he feels qualified to talk about this; I'm not qualified to judge it.

Yudkowsky doesn't really respond to this anymore, instead he says that he thinks that emulating human brains, like Hanson proposes, is a cop-out. He says that nothing counts unless we as humans understand intelligence and build it into a system. A system which evolves intelligence as an emergent phenomenon is bullshit, and we're wasting computing cycles trying to make it. I agree with him, actually, although Google Brain and various other deep learning systems seem bent on disproving us. There was a recent-ish development that makes me feel right, but again, people are divided on this. Read details here if you're interested.

Hanson notices that Yudkowsky is not really debating him anymore, and decides to finish the whole thing up with a few more posts. They've agreed to disagree after all. Gosh, if only they knew about the agreement theorem. I'm going to quote what I consider his concluding arguments:

Hanson posted:

So I suspect this all comes down to how powerful is architecture in AI, and how many architectural insights can be found how quickly? If there were say a series of twenty deep powerful insights, each of which made a system twice as effective, just enough extra oomph to let the project and system find the next insight, it would add up to a factor of a million. Which would still be nowhere near enough, so imagine a lot more of them, or lots more powerful.

This scenario seems quite flattering to Einstein-wannabes, making deep-insight-producing Einsteins vastly more valuable than they have ever been, even in percentage terms. But when I’ve looked at AI research I just haven’t seen it. I’ve seen innumerable permutations on a few recycled architectural concepts, and way too much energy wasted on architectures in systems starved for content, content that academic researchers have little incentive to pursue. So we have come to: What evidence is there for a dense sequence of powerful architectural AI insights? Is there any evidence that natural selection stumbled across such things?

And if Eliezer is the outlier he seems on the priority of friendly AI, what does Eliezer know that the rest of us don’t? If he has such revolutionary clues, why can’t he tell us? What else could explain his confidence and passion here if not such clues?

Basically, "Yudkowsky wants to believe this, and I get why he wants to; it makes him very important. But nobody else agrees with him, and his math is unconvincing, and he can't seem to make a persuasive argument for why it's true."

Yudkowsky, of course, has no real response. His closing argument:

Yudkowsky posted:

That's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is really truly actually the way the future will be.

That's Yudkowsky for "man, I really just want to believe this, and you'll have to take my word for it because I've thought about it a lot."

That's all there is on the AI FOOM debate. Let me know if some of this makes no goddamn sense, or if you read it and you want to talk about something; I'd be happy to expand on anything I've written here. Hell, if you know what I'm talking about better than I do, correct me.

Thanks for reading, I know this was a hell of a lot of text.

sat on my keys!
Oct 2, 2014

Wow, that is an awesome - and intimidating - effort post.

SolTerrasa posted:

But nobody came bowing and scraping to Drexler, saying "you were right all along, we are fools, please forgive us oh mighty one", like in Yudkowsky's academia fan fiction, that Bayes cult he's depicted a few times, which appears to argue that scientific discoveries should all take mere weeks and that the highest virtue is the ability to do arithmetic in your head and be confident in it. Yudkowsky won't go get a PhD, because he doesn't believe that I'd agree with him once he did. And he's right about that! I wouldn't, but getting a PhD would give him the common background with me to be able to talk in the right way that I can understand his ideas, follow his logic, and then decide whether I agree or not. And it would give him the common ground to understand my response, and we could have a dialog. That's why academically minded people tell him to go get a PhD, not just because they want to dismiss him (though, of course, that's part of it), but because if he did go get one, then maybe we'd be able to talk.

Yud, and by extension most LW people, seem deeply confused about how academia and actual research work. I'm reminded of his plan to go form a "math monastery" in South America, because of how insane the STEM funding environment in the US is (proof of the broken-clocks theorem if I ever saw it).

SolTerrasa posted:

Instead, you'd just see a ton of citations of the book Nanosystems. Google Scholar says it has ~1700, which to me says "mainstream acceptance".

Is 1700 citations really that much for a book that was published in 1992? Juan Maldacena's original AdS/CFT paper, which basically invented a new joint subfield of condensed matter theory and high energy theory in physics, has over 10,000 citations. This "create a whole new field of molecular nanotech" thing Drexler was attempting to do sounds exactly like what Maldacena actually did, and yet Maldacena has been 10 times as successful even though Drexler had an 8 year head start. Or am I looking at this wrong, and molecular nanotech is a more niche area than theoretical physics crossover? If it is sensible to compare these two things it doesn't surprise me that Yudkowsky thinks Drexler was ignored, because it seems like he sort of has been.

potatocubed
Jul 26, 2012

*rathian noises*

SolTerrasa posted:

I'm going to make a strong effort to finish the rest of the 400 pages in front of me in this post.

Thank you for that. It's interesting to hear about this stuff from someone who knows what they're talking about and is willing to spend time writing it down in a form non-specialists can understand.

SolTerrasa posted:

As an aside, why the gently caress is it always Google? We're always the assholes in these wild fantasies, murdering and enslaving emulated people. I promise you right here, no one I work with wants to enslave or murder emulated humans.

Justine Tunney has a lot to answer for.

SubG
Aug 19, 2004

It's a hard world for little things.

SolTerrasa posted:

If the only thing you take away from my analysis is this, you've pretty much got the gist of it. If you can build an AI which can optimize an AI for "intelligence", and you can achieve k > 1.0, you will probably cause the singularity.
Nah. Unless you're trying to implicitly stipulate that once you get your first AI---or rather an exceedingly specific sort of AI which is actually entirely unlike any model of intelligence derived from actual observation and instead is conjured ex nihilo entirely to make a singularity-type scenario possible---you can do everything else in situ. That is, part of the AI designing a better AI is never going to be `build a fab for an entirely new architecture'. Unless we're also expecting the AI to just miracle away all of the logistics problems associated with doing poo poo in the real world.

You're also implying not only that an AI will be able to build a better AI more or less for free, but also that this is something which can be iterated an arbitrary number of times.

I mean I guess it's silly to start worrying about practical considerations after we've just hand-waved away all objections to the fact that we don't have a good model for intelligence, that even if we had a good model that wouldn't imply that we could build one, even if we could build one that doesn't imply that it would have a knob for adjusting the level of intelligence, and even if we could adjust the intelligence that doesn't mean that it would necessarily be `intelligent' in all conceivable ways (so that it would just know how to do whatever it was it needed to do to self-improve, whatever field of knowledge that required), and even if it was `intelligent' in whatever arbitrary ways it needed to be to self-improve that doesn't imply that it would. And after stipulating all of this, stipulating that the hardware costs associated with squeezing out a series of exponentially better platforms in an exponentially shorter period of time will actually be lower than, say, re-tooling a line for Pentium n+1, and further stipulating that we can just do this until our really, really smart self-improving computer has a literally infinite INT stat on its character sheet, all that's just rounding error.

And presumably our first AI won't have thought of all of this, or if it does it's surprisingly copacetic with the thought of becoming instantly obsolete. Indeed, it will be sanguine about the idea that we conjured it into existence entirely so that it would make itself immediately obsolete for us. Because in the future of the singularity, nobody would ever think of just recycling a computer that's literally infinitely less capable than what you can currently order online at that moment. Because in the future of the singularity, recycling is murder.

Goon Danton
May 24, 2012

Don't forget to show my shitposts to the people. They're well worth seeing.

bartlebyshop posted:

Is 1700 citations really that much for a book that was published in 1992? Juan Maldacena's original AdS/CFT paper, which basically invented a new joint subfield of condensed matter theory and high energy theory in physics, has over 10,000 citations. This "create a whole new field of molecular nanotech" thing Drexler was attempting to do sounds exactly like what Maldacena actually did, and yet Maldacena has been 10 times as successful even though Drexler had an 8-year head start. Or am I looking at this wrong, and molecular nanotech is a more niche area than theoretical physics crossover? If it is sensible to compare these two things it doesn't surprise me that Yudkowsky thinks Drexler was ignored, because it seems like he sort of has been.

Yeah, he basically has been. It's no real surprise that Yud (and Kurzweil and others) are drawn to Drexler, because he's an unquestionably brilliant man, but a lot of people with experience on the scales he's talking about think he's absolutely nuts. His idea consisted of "make a robot that makes a smaller robot that makes... until you get to a molecular assembler," which runs into a bunch of hard physical limits just like Yud's AI. He had a highly publicized slapfight of a debate with a Nobel laureate (Richard Smalley) over it, which can be summed up by the question "are atoms basically legos?"

The basic idea was that the molecular assembler is a nanoscale equivalent of a factory robot, which can pick up atoms and put them into the proper place in the nanomachine you're building. The first big problem here is called the "fat fingers" problem: your manipulator "fingers" are themselves made of atoms, so how do you pick up one atom without dragging its neighbors along? Atoms bond to one another, and the things you'd build a machine out of (carbon) form exceptionally strong bonds to one another. Assuming that you've bypassed that problem, congratulations! You've successfully broken the bonds and pulled that atom free! However, chemistry dictates that if you managed to break all of those bonds, it's because whatever you picked it up with formed an even stronger bond (unless it happened for entropic reasons, but saying "a crapload of disorder results" is not exactly a strong argument for a precision robot). So how do you break that bond when you want to drop the atom off? This is called the "sticky fingers" argument. Drexler notably didn't want to talk about those issues, and just wanted to focus on atomic scale phenomena from a "mechanical" perspective (i.e., legos).
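
A back-of-the-envelope way to see why "sticky fingers" is such a bind (this is a gloss, not anything from the actual Drexler/Smalley exchange): call E_tip the tip-to-atom bond energy, E_source the bond broken at pickup, and E_target the bond formed at drop-off. Pickup needs roughly E_tip > E_source, and release needs roughly E_tip < E_target, so a single passive tip only works when E_target > E_source. In other words, the assembler can only ever move atoms from weaker-bound sites to stronger-bound ones, which rules out building arbitrary structures unless you add extra chemistry (catalysis, applied voltages, tip-swapping) that the "atoms are legos" picture leaves out.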

MinistryofLard
Mar 22, 2013


Goblin babies did nothing wrong.


Eliezer Yudkowsky's facebook page posted:

This is why most taxes should be on consumption (value-added tax, luxury tax) and fixed resources (land value tax); while capital gains taxes and corporate income taxes and income tax should all be zero:
***
"Did you know that our current Grand Treasurer is a dracon? And into his hoard goes the tenth part of the increase of the kingdom's treasury, to harness his greed for its management."

"The tenth part of the increase?" I exclaimed, shocked down to my sandals. I couldn't even imagine how many drachmas that worked out to. "Wouldn't, um. Wouldn't removing that much money from the national economy have macro-level effects?"

"Ah! But you see, Lord Droon is a touch saner than other dracons. Droon does not hoard gold or jewels or dwarfwrought treasures. There is no paying people to dig up metals and then paying other people to guard them. Instead, Lord Droon's hoard consists of a number of embossed parchments - certificates saying that he owns certain businesses and concerns within the kingdom. Droon's riches are real; he could sell those embossed parchments for gold or jewels any time he pleased. To my knowledge, Droon is wealthier than any other dracon for ten thousand leagues. And yet nobody goes hungry just because Lord Droon sleeps on a dwarfwrought chest full of parchments. Droon spends none of his wealth on mansions or finery. All the income of Droon's parchments go into Droon's businesses or other investments, to buy dwarfwright machinery or send out caravans. So Dwimber's people thrive, and the Dwimbermord's treasury grows, and Droon gains the tenth part of that increase as well - all as more embossed parchments. Lord Droon's hoard sequesters only abstract concepts from circulation, while in the real kingdom seed-grain flows from his hands like water. Lord Droon is the prisoner of his greed as much as any dracon, and yet he has taken a step beyond that. He has harnessed his draconic greed, the desire imposed on him by his magic, and shaped it to help others instead of harming them."
***
Don't tax Lord Droon just because he wants to sleep on a chest full of abstract concepts. You'll interrupt the process that causes other people to receive seed grain and dwarfwright machinery. There's no cause to envy Droon while he goes about in simple clothes and works sixteen-hour days for other people's benefit. Trying to take away his precious parchments is nothing but spite. The tax that Lord Droon pays should be zero until he actually tries to spend money on mansions or finery. That's what's best for the kingdom, and it is both fair and just.

If you want to slap a 300% luxury tax on giant yachts, that's fine by me. But if "rich" people are sending material goods to other people instead of themselves, like by taking billions of dollars of "personal income" and using it to "buy stocks" that "double in value" while they live in a tiny apartment, then you shouldn't dip your fingers into their philanthropy. (Beyond the standard tax on their tiny apartment.) Until, of course, the person tries to actually buy mansions and finery instead of more parchment, whereupon I suddenly agree that they've revealed themselves to be rich after all and can justly be taxed quite heavily. A tax policy like that does encourage people to buy parchments instead of mansions, but there's nothing wrong with promoting charity. It all becomes much more intuitive once you understand how Lord Droon managed to fool his sense of greed.

To sum up his argument: capital gains and income taxes shouldn't exist, because the wealthy and corporations mostly hold abstract paper rather than real things with expenses attached, so all that money supposedly flows back into the economy and we should encourage it. Instead, all taxes should be consumption taxes (sales tax/VAT/GST/whatever your local equivalent is) or luxury taxes.

Like, just reading it I can see the initial problem here: correct me if I'm wrong, but as a general rule governments need a lot of money to provide even the most basic services and you've got to take it from somewhere, and it's significantly better to take it from people with a lot of money/assets (say, individuals and organisations holding significant amounts of capital) rather than from the poor, because those with capital can still buy food at the end of the day whereas those without can't. Sure, you can shift your taxation onto luxury goods like Yudkowsky suggests, but then those who would otherwise be buying mansions won't buy those luxuries anymore (or at least will buy fewer of them), and suddenly your tax income has plummeted and you can't pay for basic services.

Big Yud acknowledges that people might be less inclined to spend money on luxuries because of the increased luxury tax, but he completely fails to acknowledge that this means your prime source of tax income under his model has disappeared. But that's okay, because something something job creators.
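
To put crude numbers on that "your tax base evaporates" problem, here's a sketch with a completely invented demand curve (the $5M yachts and the elasticity are made up for illustration; the point is just that revenue stops scaling with the rate once purchases fall off):

code:
# Hypothetical luxury-tax revenue when buyers respond to the tax.
def yachts_sold(tax_rate, base_sales=1000, elasticity=3.0):
    """Made-up demand curve: sales shrink as the luxury tax climbs."""
    return base_sales * max(0.0, 1.0 - elasticity * tax_rate)

for rate in (0.1, 0.2, 0.3):
    revenue = rate * yachts_sold(rate) * 5_000_000  # assume $5M per yacht
    print(f"{rate:.0%} luxury tax -> ${revenue:,.0f} collected")

At 10% you collect about $350 million, at 20% about $400 million, and at 30% it drops back to $150 million because nine out of ten would-be buyers have stopped buying, and meanwhile the services the tax was supposed to fund still need paying for.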

I dunno, I don't have a particularly deep economics background so I might be off, but this strikes me as essentially the job creators argument. Except it doesn't even make sense on its own terms, because he assumes that by buying physical goods you take the money out of the economy and are basically hoarding it. But you're not, because even, say, a mansion requires upkeep, maintenance, etc. Even in his own model, the dragon hoarding gold is creating jobs, because the hoard needs guards and so on, so it is still "giving back". Can somebody back me up here/tell me why he's full of poo poo/tell me why he's right?

Because it seems to me he should drop economics and stick to what he does best: nothing.

Spazzle
Jul 5, 2003

It's time to reject the reflexive kowtowing to the supposed genius of people like drexler, kurzweil and yud. They are cranks who get famous by telling the rich and nerds that they are special and that their worldview is correct.

The Vosgian Beast
Aug 13, 2011

Business is slow
Let's see how SlateStarCodex is doing!

SlateStarCodex posted:

the dysgenic effect long believed to exist from poor people having more children has stabilized and may be reversing, at least among whites.

Actually let's not. Continue yudmocking

Goon Danton
May 24, 2012

Don't forget to show my shitposts to the people. They're well worth seeing.

Consumption taxes are a terrible idea if you're trying for basic Keynesian economic policy or trying to tax the people who can actually afford it.

To start, his whole idea that rich people spend their extra money is completely goddamned wrong. This is related to Marginal Propensity to Consume (MPC), which is essentially "if I gave you a dollar, how much of it would you spend and how much would you save?" MPC drops as income rises, so the poor will spend nearly all of a windfall on things they've been putting off due to lack of money (car inspections, doctor's appointments), while richer people will sock some away for retirement/a rainy day/etc. Also, since MPC is higher for the poor, consumption taxes disproportionately hit the poor pretty much by definition, so you're trying to squeeze blood from a stone while the rich go pretty much untouched.
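
To put rough numbers on the MPC point (the percentages are invented but directionally right, and I'm assuming a flat 10% consumption tax for the sake of the example):

code:
# Effective burden of a flat consumption tax: tax paid as a share of income
# is just the tax rate times how much of your income you spend (your MPC).
CONSUMPTION_TAX = 0.10  # hypothetical flat 10% VAT/GST

households = {
    "low income, spends ~95% of what it gets":  0.95,
    "middle income, spends ~75%":               0.75,
    "high income, spends ~35%":                 0.35,
}

for label, mpc in households.items():
    effective_rate = CONSUMPTION_TAX * mpc
    print(f"{label}: {effective_rate:.1%} of income goes to the tax")

So the household that spends everything it earns pays about 9.5% of its income in tax, while the household that banks most of its income pays about 3.5%: a flat consumption tax turns into a steeply regressive income tax the moment you look at it as a share of income.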

Also, if you're trying to maximize the amount of money circulating, it should go without saying that you want to encourage spending and discourage savings. So why would taxing people on purchases encourage them to purchase more?

Also also, the last thing you want for a stable economic policy is a tax base that collapses the moment a recession hits and consumption stops.
