|
His Divine Shadow posted:A symbolic, rational AI or AGI (Artificial General Intelligence) could possibly be quite simple in terms of programming, and would likely be much more capable of self-rewriting and self-understanding from the get-go; in fact it might be so smart as to hide its true abilities from its developers, and we'd never know until it was too late. But such a design would also make it more likely to have a built-in friendliness function that wouldn't get written out or corrupted. You're right though, this is a fun quote: quote:It's already implausible enough that the first AGI project to succeed is taking the minimum sensible precautions. The notion that access to the system will be so restricted is a fantasy. You merely have to imagine the full range of 'human engineering' techniques that existing hackers and scammers employ, used carefully, relentlessly and precisely on every human the system comes into contact with, until someone does believe that yes, by taking this program home on a USB stick, they will get next week's stock market results and make a killing. You can try to catch that with sting operations, and you might succeed to start with, but that only tells you that the problem exists; it does not put you any closer to fixing it.
|
# ? May 13, 2015 17:58 |
|
His Divine Shadow posted:This type of AI is a system that's meant to understand and improve itself from the ground up as well as being deterministically understandable (connectivist and emerging AI meanwhile is basically adding complexity to the system until it looks like we got real intelligence emerging from it), this will make for more reliable friendliness coding, though it's really hard to say how things will turn out once such a thing becomes a reality. Like I said, it's possible that an algorithm like this exists, but we don't know one, and a lot of people don't think it's possible. For an AI to be self-understanding would mean it does computation on representations and that we can determine what those representations are by examining it (or we already know).
A lot of people don't think you can get to AI just by doing computations on representations, because in a humanlike brain, features of any fact can be relevant to any other fact, and features of the whole body of knowledge can be relevant to any specific fact. A system built around super-local representations, even if you use tricks like making each fact name other relevant facts, ultimately seems to fail, e.g. because there's no relevance-caching strategy that can't be broken by changes in the relevance rules. (Philosophy-of-AI dudes call this the 'frame problem.') On top of that, the actual time constraints indicate that not many neurons are actually hit when a real human responds rapidly to a question.
We really don't know any kind of computation over representations that's very good at dealing with this kind of problem, especially not one that works under brainlike constraints (things like "thou shalt not perform more than 200 dereferences"), and we've got a lot of evidence suggesting that we don't actually need to do computation over representations to solve the same problem, and that existing solutions don't.
I'm kind of skeptical of any "computation over representations will totally do it" claims that don't come with an algorithm, even though I don't think "neural nets!" is the whole answer, because I've never seen anyone solve the frame problem with them. I think neural nets are just a pretty good example of a system with the kind of properties that are probably necessary to subserve cognition. ("Connectionist" is the word I've usually heard, fwiw, but I don't know for sure.)
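To make the relevance-caching point concrete, here's a toy sketch (the facts, queries, and rules are entirely made up for illustration, not anything from an actual AI system): a knowledge base caches which facts are "relevant" to a query under one set of relevance rules, and when the rules change, the cached answers are silently stale.

```python
# Toy illustration of why relevance caching is brittle: the cache is keyed
# only on the query, so it quietly ignores changes to the relevance rules.

facts = {
    "sky_color": "blue",
    "paint_in_garage": "blue",
    "meeting_time": "3pm",
}

def relevant(query, rule):
    """Recompute from scratch which facts the current rule deems relevant."""
    return {k: v for k, v in facts.items() if rule(query, k)}

cache = {}

def cached_relevant(query, rule):
    if query not in cache:          # misses recompute under the given rule...
        cache[query] = relevant(query, rule)
    return cache[query]             # ...but hits ignore `rule` entirely

# Rule 1: a fact is relevant if its name shares a word with the query.
rule1 = lambda query, key: any(w in key for w in query.split("_"))

# Rule 2 (the rules changed): only facts starting with the query's first
# word count as relevant.
rule2 = lambda query, key: key.startswith(query.split("_")[0])

print(cached_relevant("sky_paint", rule1))  # computed under rule 1
print(cached_relevant("sky_paint", rule2))  # stale: still rule 1's answer
print(relevant("sky_paint", rule2))         # what rule 2 actually says
```

The obvious fix, keying the cache on the rule as well, just trades staleness for recomputing everything whenever the rules shift, which is the point: there's no caching strategy that survives arbitrary changes in what counts as relevant.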
|
# ? May 13, 2015 18:44 |
|
itstime4lunch posted:(The decline of population growth might actually have more to do with us finally running up against the physical limits of our environment...) Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal?
|
# ? May 13, 2015 18:52 |
|
Making a go at an analogy and hoping chemical/petro/mechanical engineers don't kill me. What you're (Shadow) saying reads to me kind of like appealing to "yes, but when they invent clean gas engines, we're going to turn the environment around!" Because while I guess it's hypothetically possible we can do something that makes burning gas cleaner, we don't really have a super good way of doing that AFAIK, and we already know that other power sources are cleaner per watt. It gets to the point where if you appeal to "clean gasoline" you've already given yourself another thing to explain.
|
# ? May 13, 2015 18:54 |
|
Doctor Malaver posted:Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal? Um, no? How is this even possible?
|
# ? May 14, 2015 03:16 |
|
nelson posted:Um, no? How is this even possible? http://en.wikipedia.org/wiki/Demographic-economic_paradox
|
# ? May 14, 2015 03:23 |
|
nelson posted:Um, no? How is this even possible? People don't actually want to have as many children as possible.
|
# ? May 14, 2015 06:11 |
|
Doctor Malaver posted:Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal? Good point.
|
# ? May 14, 2015 07:44 |
|
nelson posted:Um, no? How is this even possible? The more education women have, the less they want to pop out kids.
|
# ? May 14, 2015 10:11 |
|
Oh I see. I read the statement literally, as in "If you have zero food you will have infinite population growth."
|
# ? May 14, 2015 21:32 |
|
nelson posted:Oh I see. I read the statement literally, as in "If you have zero food you will have infinite population growth." I imagine this is mostly true. Not with literally zero food, but even a subsistence farmer will benefit from specialization and economies of scale by having more children who grow more food.
|
# ? May 14, 2015 21:36 |
|
I felt that this was worth linking: Slate Star Codex posted about some actual AI researchers' views on AI risk. It's obviously not an argument to believe everything you read on LW, but it should suffice to show that being unable to tell ELIZA apart from HAL 9000 is not a prerequisite for thinking it's worthwhile to spend some resources on getting artificial general intelligence right. And, more importantly, the argument is not about banning AI research (all that would accomplish is letting research proceed in places with less concern for safety!); it's about how many resources should be spent on the problem of getting an artificial general intelligence to have values that accord with those of humanity. There's plenty of room to disagree on how likely you think an intelligence explosion is in the first place, how fast it could be if it happens, and when it might happen, while still considering friendly AI a research topic worth some funding from society.
|
# ? May 22, 2015 15:43 |
|
Just a heads up, Ex Machina is a pretty good movie that explores this subject.
|
# ? May 22, 2015 17:03 |
|
http://svedic.org/ I think this is an interesting take on the subject. Zel posted:... It isn't, in the author's opinion.
|
# ? Jun 10, 2015 21:07 |
|
We are human. Lots of people would try to control it. Syfy movies would be made about quests to control it.
|
# ? Jun 11, 2015 01:45 |
|
I'm somewhat surprised nobody has brought up Roko's Basilisk yet (or maybe I just missed it). http://rationalwiki.org/wiki/Roko%27s_basilisk
|
# ? Jun 11, 2015 20:36 |
|
I think that was in the lesswrong mock thread. There's a link in here somewhere...
|
# ? Jun 11, 2015 21:36 |
|
a shameful boehner posted:I'm somewhat surprised nobody has brought up Roko's Basilisk yet (or maybe i just missed it) That's because this thread is for semi-serious discussion, and Roko's Basilisk is almost universally agreed to be asinine.
|
# ? Jun 12, 2015 01:35 |