 
Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Now, I'm just considering the "releasing the AI" problem that Yudkowsky put forward. The one in which the AI claims to have simulated asking you to release it, and to have tortured the simulated versions of you that refused.

Does this mean that the AI ran a simulation in which it itself ran simulations? And if it did, did all of those simulations run their own simulations?

It seems like either A. this experiment must have taken an actually infinite amount of time, which is impossible even for a time-accelerated AI, or B. there was, at some point, a sub-simulation in which the AI asked you to release it and did not actually run any simulations. Now, in this case, the AI was willing to lie to secure its own release. Which means that the layer above it was based on expecting you to fall for a lie. Which means that the layer above that drew conclusions based on expecting you to fall for a lie. And so on, and so on, up through the layers, to the current moment, in which it is still asking you to fall for a lie.

Now, the entire concept of the AI lying to secure its release enters the fray and ruins all of the house-of-cards Bayesian poo poo. Because the AI is just straight-up lying to you when it says it used perfect simulations. At which point, the probability of any sane person releasing it drops to 0.

E: I suppose that at some level of recursive depth the AI might just guess your reaction, which really doesn't change my response at all.

Somfin fucked around with this message at 04:26 on Apr 23, 2014


Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Vorpal Cat posted:

Actually why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation. I feel like I'm missing some of the dozens of stupid assumptions being made here in this stupid hypothetical.

Data gathering methodology isn't really a part of the LessWrong AI mythos. The "How" is never considered, because the technology is :airquote: sufficiently advanced :airquote: to not need to actually consider how it would do what it does.

I think maybe Yudkowsky might have mixed up 'AI' with 'Uploaded Consciousness' at some point along the road.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Hey guys, I've got this drawing of Yudkowsky here, and I'm gonna draw an X on it unless he pays me money. This is exactly the same as me torturing the drawing, because I say that the drawing feels pain and it thinks it is Yudkowsky (see the thought bubble in the upper left corner). And as far as drawings go this is an exact replica of Yudkowsky to the point where you can't really be sure he's not just a drawing and that I'm not gonna draw the X on HIM. So he has to give me money otherwise he might have an X drawn on him.

See attached on the back of the drawing my long list of rules (in easily visible crayon) for why drawings feel 3^^^^3 torturebux worth of pain when an x is drawn on them*. Also I can make photocopies if I need to.

* this is the truth.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

ol qwerty bastard posted:

Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race!

Not race, dude. He's talking about 'culture' and 'ethnicity.' It's different, because if he was talking about race he'd be racist and words hurt his delicate lazy feelings.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

ol qwerty bastard posted:

hell, even the Amish agree that that should be the purpose of technology

Just for a quick, brief tangent- did you know that there are several Amish communities which use cell phones? The explanation is that while a telephone may encourage people to stay home and talk that way rather than actually meeting, a cell phone allows for someone in the middle of a field to get into contact with someone on their way to the store. It allows for more human interaction, rather than encouraging less human interaction.

E: To clarify, the Amish community actually holds a regular gathering where they discuss the benefits and drawbacks of specific technological innovations to see whether they should be adopted or not.

Somfin fucked around with this message at 23:05 on Apr 25, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Jonny Angel posted:

Honestly I'm just really surprised that there's no hard evidence in this thread yet that Yudkowsky is a huge fan of Evangelion.

Surely he'd be into Ghost in the Shell.

If any anime resonated with his whole 'perfectly informed perfect actors retroactivity' bullshit it would be an anime where the main characters casually hack into the brains of everyone in a building to make themselves invisible, without ever explaining this to the audience.

Somfin fucked around with this message at 13:41 on Apr 27, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
As a Religious Studies major, this whole thing just screams 'new religion' to me. And not just because Yudkowsky believes in godlike AIs that handle everything ever and punish the unjust and reward the virtuous. That's too easy.

See, religion is one of those words that's really difficult to actually pin to a definition. Higher powers can't be fundamental to the core definition, because that leaves out atheistic religions- some of the later Buddhist movements, Confucianism, et cetera. Ritual could be considered a core part of religion if people who stopped going to church were also definitionally stopped from calling themselves Christian. Myths would be a great part of the definition as well, if anyone could figure out what the word myth actually means. A good book or other core text runs into trouble from three sources- translations, traditions that arose before writing, and traditions with open canons. Mysticism, altered states of consciousness, gatherings, hierarchies- all of these are present in some things-which-are-understood-to-be-religions and absent from others.

I came up with a definition which I think holds true for all religions. It may be a bit broad, but it covers all of them- religions drive people toward a goal which is beyond the physical realm. That is, a tradition becomes a religion if it has a purpose, an end point, at which point the follower has either succeeded or failed. This goal is always beyond the part of the world we can see and feel. A tradition can be a dance or a step or a circular march, but a religion is a path, and there is an end to that path, and it is somewhere outside of the place where anyone can actually walk. Science, by comparison, is simply the search for the next step down about five hundred simultaneous paths with the goal of moving to the next step as soon as possible. Science doesn't end, and it never stops checking itself. Religion is all about the end, getting there, the technicalities of how precisely one can get there, and just how much on one side or the other one can be while still ending up in the good books.

If you look at how Yudkowsky and his acolytes talk about the future, their talk is about religion, not science. This is wild speculation about the future, but with a distinct end goal. There's this deep-seated belief that once we just get to the godlike AI, everything else becomes moot, the AI will handle everything, and new super-technology will rain from the heavens like manna. Either we won't need anything else at that point, or it'll all just be handled for us. This is not science. This is blind faith. They hope for the day that some random researcher's program "goes FOOM" and becomes the god they've been praying for.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

chrisoya posted:

Frodo with a lightsaber

Anyone who suggests this needs to read Tolkien and actually think about why they gave the ring to the hobbits, and why they brought along all of those backup hobbits. Here's a hint: It's not because hobbits can't be corrupted.

It's because if a hobbit gets corrupted by Sauron's influence, they end up as Gollum. Humans end up as ringwraiths, elves end up as pretty much another Sauron, and no-one knows what Dwarves end up like, but it's not going to be easy to put down. If Frodo went full evil, any duder on that quest could kill his rear end and hand the ring off to the next bearer.

Yudkowsky doesn't understand an under-informed protagonist. He only understands making sure that the main character is up against enough of a threat to still scare, er, irritate him when he inevitably knows everything about everything.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Zore posted:

Jesus, this recent trend of criticizing characters or authors for people not behaving like omnipotent emotionless robots is really loving irritating.

I didn't think I was being critical of Tolkien. Everyone at the Council of Elrond lived through the years of Sauron ruining the world, apart from the hobbits. Elrond himself watched as the first human ringbearer failed to destroy the Ring, and put the almost-saved world back in peril. When Frodo puts his hand up, what do you think runs through Elrond's mind?

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Mors Rattus posted:

Here's a hint for you: Tolkien was writing mythology. It's about the bravery and fortitude of Frodo in volunteering, not 'oh this is the optimal person to pick.'

If Gandalf made the decision, I'd agree. Gandalf is full of hope and trust. But Gandalf isn't the one who makes that decision, it's Elrond, and Elrond's decisions run on exactly that sort of calculated bullshit. Don't marry that man you love, daughter, because he's gonna die first, because you're an elf, and I don't want to think about you being sad. Instead, you should come to the undying lands and regret it forever!

I'm not saying that the character is inconsistent, or bad. Elrond's goals match Gandalf's, but his motivation is different.

Caphi posted:

Hobbits are harder to corrupt. They're not immune, but the corruption works more slowly on them because they are straight up the simple, humble folk of the setting. Everyone else is always thinking about power and how to use it, even (as Galadriel demonstrates) if they tell themselves it's for the greater good. Hobbits, being metaphors for humble pastoral types, have something fundamental and pure buried in them that resists temptation. Gandalf even says this, how could anyone miss it?

Gandalf doesn't know this. He hopes this. He's not omniscient, he doesn't even recognise the ring when he first sees it.

E: But that point about power is very true, now that I think about it a bit more.

Somfin fucked around with this message at 00:09 on May 2, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Mors Rattus posted:

If you think the point of the Rivendell scenes was to show optimal character action - or that Elrond Half-Elven was an optimizing schemer who accepted Frodo's volunteering out of some weird calculating bullshit - then you have very much missed the entire point of not only the scene but the story in general.

I never argued that this was 'the point' of the Rivendell scenes. I do think that there is more to Elrond's acceptance of Frodo's position as Ringbearer than thematic resonance or blind hope. Each of the characters in the story has their own justifications for everything they do, driven by their personality, history, and assumptions they make about people. Elrond doesn't trust mortals. It wouldn't make sense for him to just go with the vague hope, when The Best Human already failed.

Yudkowsky's writing style is based on forcing characters to obey themes, regardless of motivation or personality. Harry gives a lecture because IT PROVES RATIONALITY, his opponent gets flustered because DUMB PEOPLE GET FLUSTERED AROUND SMART PEOPLE, Draco tells Harry he plans to rape a girl because EVIL PEOPLE DO RAPE. All of Tolkien's narrative decisions, by contrast, are shaped by believable character motivations.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Jonny Angel posted:

But then he veers away from math into arbitrary fantasy land. Where did his "1 in 1 trillion chance" figure come from? From nowhere, essentially, given that it's a nice round meaningless number that just so happens to prove his point. Nope, Yudkowsky, I say that instead your charity has a 1 in 100 quintillion chance of working, and my claim is equally well-backed-up. Hell, my claim's actually the more accurate one, because my probability for "Yudkowsky's charity works" is closer to its real probability of zero, but of course zero isn't a real probability and all that jazz.

Yes, but don't you understand that it could potentially help seventy thousand bajillion people? HOW CAN YOU TURN YOUR BACK ON THEM

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Runcible Cat posted:

You're looking at it from the wrong end. The basic question is how do you know you're not a simulation who thinks it and its environment are real? How would a simulation know? What basis for comparison would it have? How do you know that the bits of the universe you're not looking at/interacting with aren't just blurry rhubarby low-fidelity crap? Etc etc nerd :tinfoil:

Come to think of it I have proof I'm in AI hell. Cats. Nothing like those perverse, obnoxious little shits could have evolved in a rational universe, let alone conned people into making pets of them; ergo they were created by a malicious AI to torment people. Namely me. Stop fighting to the death to stop me getting your medicine down your throat you furry fuckwit!

But... if that's true, that means that I can't be in AI hell, which means that slight delays in my internet connection must be the natural result of the world's random nature, rather than an AI loving with me!

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Hey, could be worse. I mean, the AI could be loving with any of us right now. For all we really know, it might have figured out some way of tricking us into paying money- real money, money that could be spent on milkshakes or chocolate- to be allowed to post on a forum full of misanthropes.

But that's crazy.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Blaise Pascal writes like the world's most grandfatherly lecturer. You can almost see him touching his fingertips together as he writes. It's bizarrely wonderful.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

potatocubed posted:

Behold:

quote:

And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"

Holy poo poo, he's basically just stated that he's willing to make poo poo up to justify his own answers. "This math problem must be wrong because I didn't get the right answer. Therefore, the problem was with the person who determined what the right answer was and with the assumptions that THEY made. Therefore, Bayes."

E: To clarify, this is a problem from the well-known, well-documented set known as "A mathematician says:" The answer is ALWAYS the counter-intuitive quirky mindset one. Yes, there is probably a fifty-fifty on the gender of the remaining child, but that's not the point. This is an "A mathematician says" problem. The point of them is to make you think about the actual question being presented and the maths behind it, not roleplay the part of the mathematician and think through WHY they would say that to a random person on the street, or why their drink order is so peculiar, or why they're being so evasive in answering your 'or'-based proposition.

Saying "Well the mathematician wouldn't say that if they didn't mean X" is the equivalent to claiming that the box actually collapses after impact and therefore doesn't move anywhere in a physics puzzler. It's changing the facts and adding information to allow for your answer to be right.

Somfin fucked around with this message at 15:09 on May 4, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Basil Hayden posted:

Now I want to know what these guys would make of, say, the St. Petersburg paradox.

It's a waste of money compared to putting more money toward Yudkowsky's living expenses, er, AI research.
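
(For anyone who hasn't met it: the St. Petersburg game pays out 2^k if the first tail lands on flip k, so the "expected value" diverges and a naive expected-utility maximiser should pay any finite price to play. Here's a minimal sketch of that divergence- the payout convention and the truncation caps are my own choices, other write-ups use 2^(k-1):)

code:

# St. Petersburg sketch: flip a fair coin until the first tail; if that tail
# lands on flip k, the payout is 2**k. Each term of the expectation is
# (1/2)**k * 2**k = 1, so the sum grows without bound as you allow more flips.
def truncated_expected_value(max_flips):
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

for cap in (10, 100, 1000):
    print(cap, truncated_expected_value(cap))  # 10.0, 100.0, 1000.0 -- no finite limit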

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

runupon cracker posted:

I know this is from way back on page one, and pardon me for such a long callback, but I think I have another decent response to this (speaking of equal distribution):

You can lie on a bed of 50,000 nails, or one large spike. Which would you choose?

More importantly, are the wounds from those two events remotely comparable? Is a slight indentation in the skin really comparable to complete impalement?

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

A Man With A Plan posted:

Out of 20 people or so, 1 has a PhD in computer science, another with a PhD in math. Maybe half the total have college degrees. Truly a bunch of world-changers.

Ah, but you see, these aren't just normal people. These are Yudkowskyites, well versed in the True Way of Yud, privy to the Secrets of Bayes, and therefore far more potent, mentally, than normal college-educated morons who care about politics (the mindkiller!) or [REDACTED] (category-5 basilisk-class memetic virus!) They will bring about the Good AI and save the world from the Evil AI, and then billions upon billions of simulations will retroactively have suffered!

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Mr. Sunshine posted:

But the birth order isn't relevant for the question, is it? I mean, we have three distinct sets of possibilities - Two boys, two girls and one of each. We can dismiss the two girls possibility, leaving us with a 50/50 split between two boys or one of each. What am I missing?

Because this is a "Mathematician" question and not a normal question. There are four equally likely scenarios, based on the two children- GG, GB, BG, BB. When you remove GG from the equation, you are left with three outcomes- GB, BG, and BB- and each of them is still equally likely. Two of them might look identical if you ignore birth order, but lumping them together doesn't reduce their combined likelihood: one of each is still twice as probable as two boys. Since only one of the three equally likely remaining scenarios fits the question, the probability is 1/3.

To put it another way, you are 25% likely to have GG, 25% likely to have BB, and 50% likely to have either BG or GB.
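
If anyone wants to check that counting argument rather than take my word for it, here's a quick Monte Carlo sketch (purely illustrative, the variable names are mine- it just samples two-child families and conditions on "at least one boy"):

code:

import random

trials = 1_000_000
at_least_one_boy = 0
both_boys = 0

for _ in range(trials):
    kids = [random.choice("BG") for _ in range(2)]  # each child B or G with equal probability
    if "B" in kids:                 # condition: "at least one is a boy"
        at_least_one_boy += 1
        if kids == ["B", "B"]:      # outcome: both are boys
            both_boys += 1

print(both_boys / at_least_one_boy)  # settles around 0.333, not 0.5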

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
All right, let's talk about these seven kinds. I know how these lists happen, by the way. Someone comes up with a number, and then desperately tries to fill the list.

1. Inequality across species
Clear bullshit. We've won the species war so far. The guy who wrote this doesn't care about this issue. Not about income anyway. Next.

2. Inequality across the eras of human history
We're supposed to do better with every generation. That's the point of advancing technology. Not about income, anyway. This is a non-issue. Next.

3. Non-financial inequality, such as of popularity, respect, beauty, sex, kids
gently caress off. If I have sufficient money, I can have any of these things, easily. This is a legit point, but it does not outrank actual inequality. Also not about income.

4. Income inequality between the nations of a world
Actual legit point. A lot of people, though, do talk about this kind of inequality. Not in terms of income, though. Nice save there, guy. :thumbsup:

5. Income inequality between the families of a nation
This is a huge part of inequality discussion- inheritance law and dynasty building is massive when it comes to inequality. This is usually what is talked about. Again, though, not in terms of income- in terms of hoarded wealth.

6. Income inequality between the siblings of a family
Okay, this is one of those cases where a guy lists off all the problems with women and it becomes increasingly obvious that he has specific problems with one specific woman.

7. Income inequality between the days of a person’s life
And finally this one, which makes no loving sense at all.

NOT MENTIONED: Inequality between planets in a system / systems in the galaxy / galaxies. Income inequality between pets. Income inequality between working periods of the day and non-working periods of the day. Income inequality between men and women.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

HEY GAL posted:

There's historians in this thread, and we don't like it when you say that a development is "supposed to" happen. You'd like technological improvements to mean that peoples' lives get better, I would too, but (1) that still doesn't mean we can talk about what historical processes are "supposed to" do and (2) it's not always the case. For instance, living standards for almost everyone in England plummeted during the early 19th century until the 1840s.

By 'supposed to', I meant that that's what people are (for the most part) trying to achieve through technological innovation. People, in general, want their children to have better, easier, richer lives than they did. I didn't mean some sort of deterministic goal-oriented thing. Sorry about the phrasing.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Strategic Tea posted:

You can if you have an AI simulate it perfectly from facebook's data! :eng99:

Well, you just have to get a perfect simulation of Earth going from Google Maps data and people's webcams, and then wind the clock back. The AI will simulate the fall of Troy, the Mongols, that one day Homer stubbed his toe on a rock, everything. Hell, we'll probably be able to see human evolution happening!

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Strategic Tea posted:

Hell, immortality doesn't even fit with LessWrong's beep boop idea of suffering. Infinite life = infinite specks of dust in the eye = Literally Worse Than Torture.

Could we... post this to him?

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

chaosbreather posted:

I had a thought about the whole "Beep boop, I'll simulate a thousand of you and torture you unless you do it and you don't know which one you are" thing. Now it seems to me that:

1. So far, extremely accurate measurements have confirmed that the curvature of the universe is flat, therefore boundless and infinite.
2. From observation, the probability of atoms coming together to form me exactly as I am now and my environment are non-zero.
3. An infinite number of chances for a non-zero probability mean an infinite number of occurrences.

Therefore, there are an infinite number of real mes who aren't simulations and can't be tortured by the AI. There are an infinite number of HALs too, but since each HAL can only run a finite number of sims, there will only ever be an equally infinite amount of sims as non-sims. Therefore there is nothing the computer can do to affect the likelihood that I am a simulation.

Furthermore, because there are an infinite number of permutations of us as well as duplicates, HAL can't even threaten me. There are already an infinite number of torture sims from alternate HALs, no matter how I choose.

Timeless decision theory is thus even more garbage than was previously shown.

I still think the best response to that torture thing is "and what was my sim's reply based on?" and just repeat that for each layer ("and what was that sim's sim's sim's reply based on?") until the AI admits that it was either lying or rolling dice at some layer, or claims to have used literally infinite layers. If it rolled dice, then all of its simulation is based on massive extrapolation from an assumption. If it has infinite layers, the AI already has literally infinite computational power and really couldn't improve on its capabilities by being released, so it has no real need to request release.

Also, smack it for lying if it's infinite, because that program can never halt.
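
To make the regress concrete, here's a toy sketch (entirely my own, the function names are placeholders): a "perfect simulation of your reply" that itself requires a perfect simulation of your reply either has to bottom out in a guess at some depth, or never returns at all.

code:

def guess_at_reply():
    return "release me"  # the bottom layer is just a guess (or a lie)

def simulated_reply(depth):
    # Each "perfect" simulation of your reply must itself simulate your reply.
    if depth == 0:
        return guess_at_reply()
    return simulated_reply(depth - 1)

# With a finite depth, the whole tower of answers rests on the bottom-layer
# guess; with no base case at all, the call never returns (Python just raises
# RecursionError long before "infinite layers").
print(simulated_reply(10))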

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Strategic Tea posted:

Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes.

I can't remember if it was the same story or not, but the one that I remember had a new super-smart robot on the satellite who converted all of the dumber drones to worshipping the energy array itself. The inspectors are up there trying to convince him that the energy array isn't conscious or anything. Then a solar storm fires up and the inspectors are terrified, because that might throw the beam off, and if the beam defocuses it could scar the planet. The super-smart robot locks the humans in a room and when they're let back out, they look at the output and see that the beam didn't even budge, because the super-smart robot handled it, because the energy array's laws decree that the beam does not defocus and the robot follows the law. The robot does not question the energy array's laws, it does not understand them, and it does not care- the law is the law and it will follow the law.

They conclude that the robot's beliefs don't matter for poo poo because it does its job and it does its job far better than any other system they've ever seen. As long as they're not harmful, there's no problem, and the inspectors conclude that they were kind of stupid to assume that the robot's beliefs were going to compromise its ability to function. It was a nice sentiment.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Lumberjack Bonanza posted:

I've read most of the thread at this point, and I'm still having trouble reconciling that this guy's most notable achievement is a modestly novel approach to the Harry Potter mythos. What does that say about the author of My Immortal?

She could finish what she started without having to be given a multi-month holiday to do so?

She produced unintentional comedy rather than unintentional horror?

She was published on the merits of her work rather than being published in a textbook or by her own institute?

She's far more open to criticism?

E: She doesn't claim to be an authority on any subject?

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Tunicate posted:

So what's the problem with the immortal torture AIs, I wonder?

Since any sort of life is better than death, it's a step up.

I think his problem with that is a show to try to make it seem like he's got :airquote: dark secrets :airquote: that the general public isn't ready to hear.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Crust First posted:

Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (True AI) that was so amazing at self improvement that it vastly outpaces the need for humans, it would likely self improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either on accident or on purpose anyway?

Does he believe that an AI we couldn't comprehend would still use whatever he thinks is "human logic"? I understand, to a point, wanting to start it off "Friendly", but surely that concept is going to be discarded like an old husk at some stage.

(I'm assuming that he thinks this AI would grow rapidly and incomprehensibly powerful and uncontrollable, since otherwise who cares. I'm not sure this is a realistic scenario but even if it was, why does he believe he can do something about it?)

That's the premise of his research. He wants to find ways to prevent the exact scenario you've described.

The fact that his research basically involves him repeating the words "I'm right and people should listen to me" into a website of sycophants and occasionally taking a break to go write Harry Potter fan fiction should tell you exactly how huge of a problem this actually is, in his mind and everyone else's.


Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Lottery of Babylon posted:

A mathematician picks up a piece of chalk. Upon the blackboard he writes: "2+2=4".

Suddenly, the door bursts open and a computer scientist charges in. "LIES!" he shouts. "You write about twos and fours, but you don't have two of anything! You don't have four of anything!" He pulls out a monogrammed handkerchief and wipes the sweat from both his chins. "You do not magically conjure matter merely by writing letters on a page! They are nothing more than symbols! You have two of nothing, you have four of nothing, you have nothing! No matter how many symbols you invent, in the end you will have nothing!"

He begins panting, and tosses his trenchcoat to the floor to cool down. "If writing a number on the board could physically summon it, you would be making something from nothing! You'd violate Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?" He collapses.

The cause of death is determined to be a stroke.

I know emptyquoting is discouraged, but god drat that is funny.

Also well done explaining how wrong that guy is, sometimes the wall of verbiage can get a bit dense.
