fade5
May 31, 2012

by exmarx

SurreptitiousMuffin posted:

What confuses me about the whole future robot hypothetical is: why torture?

Axeman Jim posted:

[B]ut surely if these simulations were sentient, they would be capable of genuinely experiencing suffering if the AI tortured them? That would mean that they would count towards the "minimize suffering" aim that this AI apparently has, for whatever reason. So by creating billions of them and then torturing them, the AI is massively increasing the amount of suffering in the universe.
Yeah, this is where I basically stopped trying to make sense of any of this. If the AI's goal is to minimize suffering, then torturing people is counterproductive to that goal. If the AI tortures people it's not "good", and that means bringing it about is actually counterproductive to minimizing suffering. I'm assuming the Less Wrong people consider AI simulations of people to be people, otherwise they wouldn't care what a hypothetical AI does to hypothetical simulations of people. (I felt dumber just typing that out.:suicide:)

Whoever said that this is just a re-skinned religion: you got it right. This is really similar to the "Problem of Hell", only a lot more confusing and with more bullshit about future technology. (Not to derail, but thinking about the "Problem of Hell" actually led to my current belief system of universalism/universal reconciliation, and the Less Wrong stuff really reminds me of that.)

Swan Oat posted:

Well it's a sufficiently powerful AI, you see. Read the sequences.
Make sure you don't confuse sequences with series though.:v:
It is a calculus joke, so Yudkowsky wouldn't get it.


fade5
May 31, 2012

by exmarx

Lottery of Babylon posted:

Yudkowsky posted:

The process of considering how to construct the largest possible computable numbers naturally yields the recursive ordinals and the concept of ordinal analysis. All mathematical knowledge is in a sense contained in the Busy Beaver series of huge numbers.
The Busy Beaver ~~series~~ sequence (dammit Yudkowsky)

fade5 posted:

Make sure you don't confuse sequences with series though.:v:
It is a calculus joke, so Yudkowsky wouldn't get it.
Hey, look at that, I was right earlier in the thread: Yudkowsky can't distinguish between the two.:laugh: Next you'll tell me he doesn't know that there are both finite series and infinite series, or that there are multiple types of series including arithmetic series and geometric series, or how to use basic tests to determine series divergence or convergence. I just had sequences and series in Calculus II, so this is all very recent to me.

To explain the joke a bit, and to make sure I actually learned this stuff correctly in Calculus:

A sequence is a list of numbers, and the order in which the numbers are listed is important.
1, 2, 3, 4, 5, ... (this is an infinite arithmetic sequence)
4, 40, 400, 4000, 40,000, ... (this is an infinite geometric sequence)
Sequences are usually based on a mathematical formula.

A series is a sum of numbers, usually of a given sequence, so using the previous examples,
1 + 2 + 3 + 4 + 5 + ...
4 + 40 + 400 + 4000 + 40,000 + ...
are examples of series.

Or, written in series notation (this was harder to type than I thought it would be):

∑_{K=1}^{∞} K = 1 + 2 + 3 + 4 + 5 + ...

∑_{K=1}^{∞} 4(10)^(K-1) = 4 + 40 + 400 + 4000 + 40,000 + ...
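
And since I brought up convergence tests: the simplest one, the nth-term (divergence) test, says that if the terms of a series don't shrink to 0, the series can't converge. Both of my examples fail it, so both diverge (assuming I learned this right in Calc II):

lim_{K→∞} K = ∞ ≠ 0, so ∑_{K=1}^{∞} K diverges
lim_{K→∞} 4(10)^(K-1) = ∞ ≠ 0, so ∑_{K=1}^{∞} 4(10)^(K-1) diverges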

So, in summation (more Calculus jokes:v:), you don't know poo poo, Yudkowsky.
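
PS: For anyone wondering what the Busy Beaver sequence actually is: BB(n) is the most 1s a halting n-state, 2-symbol Turing machine can leave on an initially blank tape. Here's a rough brute-force sketch in Python; the machine encoding and the step limit are my own illustrative choices, and it's only feasible for n = 1 or 2 before the number of machines explodes:

```python
# Brute-force Busy Beaver sketch: find the most 1s any *halting*
# n-state, 2-symbol Turing machine leaves on a blank tape.
# The encoding and step limit here are illustrative choices, not canonical.
from itertools import product

def busy_beaver(n_states, step_limit=100):
    HALT = -1
    # A transition: (symbol to write, head move, next state).
    transitions = list(product((0, 1), (-1, 1), list(range(n_states)) + [HALT]))
    best = 0
    # A machine assigns one transition to each (state, read symbol) pair.
    for machine in product(transitions, repeat=2 * n_states):
        tape, head, state = {}, 0, 0
        for _ in range(step_limit):
            write, move, nxt = machine[2 * state + tape.get(head, 0)]
            tape[head] = write
            head += move
            state = nxt
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
        # Machines still running after step_limit are treated as non-halting,
        # which is safe for n <= 2 (every halting 2-state machine stops within 6 steps).
    return best

print(busy_beaver(1))  # 1
print(busy_beaver(2))  # 4
```

The reason no program like this works in general is that BB(n) grows faster than any computable function, which is presumably the point Yudkowsky was gesturing at.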

fade5 fucked around with this message at 21:13 on May 6, 2014

fade5
May 31, 2012

by exmarx

SubG posted:

This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress.

Accidentally ending up with a world-destroying or world-dominating superhuman AI seems about as likely as accidentally ending up with a self-sustaining colony on Mars.
Honestly, this is the part that kills me: no matter how "powerful" the AI becomes, it's still trapped in some form of physical hardware. And even if it never bluescreens, if all else fails there's no shortage of ways to destroy it. The simplest is to cut off the power, whether by flipping switches, unplugging stuff, or just cutting the loving power cords.

If you like your destruction natural, just let the weather take care of it: humidity, condensation, a thunderstorm, a flood, a hurricane/tsunami, a tornado, a snowstorm/ice storm (below -40°, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions).

If you like your destruction more active, you can use a flamethrower, a giant loving industrial magnet (think the things they pick up cars with), a fire hose attached to a water main, C4, a machine gun, liquid nitrogen, or even just a bunch of dudes with sledgehammers, among many, many other creative methods of destruction.

AI fears notwithstanding, technology is still extremely fragile, and humans are really, really good at breaking poo poo.:v:

fade5 fucked around with this message at 05:23 on Oct 28, 2014

fade5
May 31, 2012

by exmarx

BobHoward posted:

As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault.
Dude don't go down that road, it's been tried before.

All that'll end up happening is you'll lose some body parts while creating some horrifying, non-human thing. You won't get what you want, and you'll end up worse than when you started. (And you'll end up marked as a human sacrifice in a huge government plot.) Don't do it bro.

BobHoward posted:

Yup. Even more sad: note the disconnect between this and "I must live long enough for the Upload". It implies that he doesn't really believe dad-AI is a meaningful way to live after death, but he can't let himself fully acknowledge it because that would require accepting that his dad is dead forever.
Seriously, it really is amazing just how loving scared people are of death, and the lengths they go to in order to try and avoid/escape it. Just accept it; everyone loving dies. It's an integral part of not just the human condition, but life itself; there's no avoiding it.

fade5 fucked around with this message at 00:06 on Dec 8, 2014
