 
Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SolTerrasa posted:

At the Dartmouth conference, we thought machine vision, the problem of "given some pixels, identify the entities in it" would be solved in a few months. It's been 50 years and we're only okay at it.
We are so not even okay at it. I did experimental work on designing software that could take a random image and just distinguish the foreground from the background. We failed so loving hard.


SolTerrasa
Sep 2, 2011


Cardiovorax posted:

We are so not even okay at it. I did experimental work on designing software that could take a random image and just distinguish the foreground from the background. We failed so loving hard.

Not my subfield, but I was being optimistic based on my computer vision class back in school, and my robotics experiments. I got a robot to successfully determine where an object was from image alignment? It had to move, though, and to use multiple pictures of the same thing combined with SLAM (for position determination) from its other sensors, but it did sort of work. Navigated a maze by vision, and all that good stuff.

Actually, I work on Google Maps nowadays, and we use computer vision to great effect. It's one of the reasons we drive those goofy cars everywhere: we can figure out all sorts of stuff from all those pictures.

But yeah, now that I think about it, the difference between those things and what you're talking about is specific vs general CV. For computer vision to work terribly well you do need to make a lot of assumptions. Application-specific CV is currently coming up on adequate, but in the general case I'm not aware of any obvious progress. Of course, I wouldn't know for sure since no one can read every paper, it's not my subfield, and it's only tangentially related to my actual job.

Qwertycoatl
Dec 31, 2008

I'm not an AI expert, but whenever I've looked at AI stuff, it seems to
a) not work much like how Yud seems to think AIs work, and
b) obviously not be the sort of thing that will ever decide to kill all humans

I take it this is a common pattern in AI development?

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Qwertycoatl posted:

I'm not an AI expert, but whenever I've looked at AI stuff, it seems to
a) not work much like how Yud seems to think AIs work, and
b) obviously not be the sort of thing that will ever decide to kill all humans

I take it this is a common pattern in AI development?
Current AI does not show any particular signs of destroying all humans, and I don't think it's likely to; it seems like the field has been colonized by those looking to reconcile religious dreams with ideological atheism.

SolTerrasa
Sep 2, 2011


Qwertycoatl posted:

I'm not an AI expert, but whenever I've looked at AI stuff, it seems to
a) not work much like how Yud seems to think AIs work, and
b) obviously not be the sort of thing that will ever decide to kill all humans

I take it this is a common pattern in AI development?

I think actually that might be understating it.

Man, I'm typing way too much about Big Yud today, but I'll try to make this short.

Shortest possible version: we're nowhere near where Yudkowsky thinks we are, simply because this stuff is hard. He worries about infinitely self-improving AI, but we can't even really make finitely self-improving AI, except in a very basic sense. And infinitely self-improving AI of the sort he worries about is so incredibly far away from even the best we can do today that there's no great reason to suspect it can be done, let alone that it will be done soon.

The analogy last page about FTL travel is a great one. Your post could read:

"I'm not an expert on rockets, but every time I look into advances in rocketry, it seems to
A) not be very much like a warp drive
B) obviously not be the sort of thing that will ever alert the Klingons to our presence here on earth.

I take it this is a common pattern in space research?"

big scary monsters
Sep 2, 2011

-~Skullwave~-
It's not just that AI isn't nearly as advanced as LWers imagine, but also that they're working from a completely different mindset than most mainstream researchers. Their idea seems to be that someone will come up with a Strong AI more or less as a single cohesive project with top-down design; presumably they're pinning their hopes on whole brain emulation or something. This is basically what the early researchers SolTerrasa mentioned imagined - in ten or twenty years they'd have a human-level intelligence, likely based on human brain architecture.

This is not, however, the reality of the majority of modern AI research. It's a big field composed of many very specialised subfields, and communication between different areas isn't always great. They all have their own platforms and languages and ideologies and they aren't necessarily compatible without a lot of extra work. The approach now is to develop unsupervised learning and clever knowledge representation and improved natural language processing and so on, and then one day we'll combine those things into an AGI, somehow. And most people aren't even thinking about that, because the specific problems they are working on are more than hard enough to worry about on their own!

I only really work on the fringes of the field but that's my impression, at least.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



I think they specifically want whole brain emulation so you can become the computer god simulation you are meant to be and get tortured eternally because you didn't have FAITH and make a FAITH GIFT

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SolTerrasa posted:

Shortest possible version: we're nowhere near where Yudkowsky thinks we are, simply because this stuff is hard. He worries about infinitely self-improving AI, but we can't even really make finitely self-improving AI, except in a very basic sense. And infinitely self-improving AI of the sort he worries about is so incredibly far away from even the best we can do today that there's no great reason to suspect it can be done, let alone that it will be done soon.
"Hard" isn't even really the issue. The single biggest problem with AI is that we do not, in fact, understand what intelligence even is or where it comes from. We have some competent ideas about which areas of the brain process what kind of data, but how all that comes together to make intelligence, never mind consciousness? Nothing. The human brain is the most complicated information-processing machine ever made. The idea that we can make something equal or better than something of which we have no how idea how it even works is just laughable. AI research basically comprises of randomly floundering in the dark and hoping to accidentally hit on something that's kinda like the only intelligent thing we know of.

Existing "AI" is really as intelligent as a bunch of rocks. Under the hood it's all ultimately based on really simple algorithms. What self-improvement there is universally lies in improving the data set that they work with. None of them can actually improve their own decision-making process.

ArchangeI
Jul 15, 2010
Wait, they seriously based their arguments on a novel? Did the novel at least make some good points regarding the likelihood of an AI being developed in the next X years?

Because I've got a story lying around that proves that developing brain emulation technology is inherently unethical (because you'd have to wipe the test hardware from time to time, thus effectively murdering sentient beings). Just, you know, in case anyone is writing a paper on that.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

And I've got a whole heap of Asimov lying around that proves the three laws he made up are a load of baloney, and also that robots will ultimately lead to all life in the universe being a part of a telepathic hive mind superorganism thing.

Freemason Rush Week
Apr 22, 2006

Cardiovorax posted:

"Hard" isn't even really the issue. The single biggest problem with AI is that we do not, in fact, understand what intelligence even is or where it comes from. We have some competent ideas about which areas of the brain process what kind of data, but how all that comes together to make intelligence, never mind consciousness? Nothing. The human brain is the most complicated information-processing machine ever made. The idea that we can make something equal or better than something of which we have no how idea how it even works is just laughable. AI research basically comprises of randomly floundering in the dark and hoping to accidentally hit on something that's kinda like the only intelligent thing we know of.

Existing "AI" is really as intelligent as a bunch of rocks. Under the hood it's all ultimately based on really simple algorithms. What self-improvement there is universally lies in improving the data set that they work with. None of them can actually improve their own decision-making process.

For me one of the highlights of the Obama administration was when he commissioned further study into the makeup of the human brain. Obviously there's a huge benefit there for medicine and social science research, but I'm also hoping it will help give us a more solid foundation for building the next generation of AI. Some of my favorite courses in school were when we learned about the overlap between cognitive psychology and AI development, and how the cross-pollination was producing new insights for both disciplines.

SubG
Aug 19, 2004

It's a hard world for little things.

Jazu posted:

I want there to be an AI movie where the AI understands its own limitations, but the people making it don't. They keep checking whether it's sending huge amounts of data over the internet that they don't understand, or whether there's some factory full of 3d printers being built in china by a shell corporation, but no. It's just emailing suggestions to scientists. And they're kind of trying to pretend they're not disappointed.
I'd kinda like to see all the convolutions bozos like Yud would end up going through if they somehow or other managed to produce a human-equivalent AI and it turned out to be, I dunno, Barbara Jordan.

PurpleButterfly
Nov 5, 2012

SolTerrasa posted:

I think actually that might be understating it.

Man, I'm typing way too much about Big Yud today, but I'll try to make this short.

Shortest possible version: we're nowhere near where Yudkowsky thinks we are, simply because this stuff is hard. He worries about infinitely self-improving AI, but we can't even really make finitely self-improving AI, except in a very basic sense. And infinitely self-improving AI of the sort he worries about is so incredibly far away from even the best we can do today that there's no great reason to suspect it can be done, let alone that it will be done soon.

The analogy last page about FTL travel is a great one. Your post could read:

"I'm not an expert on rockets, but every time I look into advances in rocketry, it seems to
A) not be very much like a warp drive
B) obviously not be the sort of thing that will ever alert the Klingons to our presence here on earth.

I take it this is a common pattern in space research?"

Vulcans. Humans don't meet the Klingons until some time after that First Contact event.
:goonsay:

Anyway, this was a very well-put post. I've been following this thread since the beginning, and I really like the current conversations. :)

Iunnrais
Jul 25, 2007

It's gaelic.

ArchangeI posted:

Wait, they seriously based their arguments on a novel? Did the novel at least make some good points regarding the likelihood of an AI being developed in the next X years?

Because I've got a story lying around that proves that developing brain emulation technology is inherently unethical (because you'd have to wipe the test hardware from time to time, thus effectively murdering sentient beings). Just, you know, in case anyone is writing a paper on that.

Do you mean this one? I really enjoyed that one, I think it's a good philosophical sci-fi short story, the author's disclaimer of it sucking notwithstanding.

ArchangeI
Jul 15, 2010

Iunnrais posted:

Do you mean this one? I really enjoyed that one, I think it's a good philosophical sci-fi short story, the author's disclaimer of it sucking notwithstanding.

By "lying around" I meant "I have written and published", but thanks for the link anyway. It's a good story that makes a few very interesting points that I didn't consider when writing mine, like the AI be willing to give up its hardware for newer versions, in a sense like people give up their lives for their children. Personally, the question of how society would deal with that sort of technology, how it would transform our idea of what "human" is, particularly in terms of universal human rights, is so much more interesting than "What do I do when I'm immortal and get bored with my catgirl harem?" or "If an all-powerful, all-knowing AI is created, will it simulate and torture us?"

Iunnrais
Jul 25, 2007

It's gaelic.

ArchangeI posted:

By "lying around" I meant "I have written and published", but thanks for the link anyway.

Oh, awesome! I don't suppose it's published online anywhere I could read? I love these kinds of technical/social sci-fi works.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

ArchangeI posted:

By "lying around" I meant "I have written and published", but thanks for the link anyway. It's a good story that makes a few very interesting points that I didn't consider when writing mine, like the AI be willing to give up its hardware for newer versions, in a sense like people give up their lives for their children. Personally, the question of how society would deal with that sort of technology, how it would transform our idea of what "human" is, particularly in terms of universal human rights, is so much more interesting than "What do I do when I'm immortal and get bored with my catgirl harem?" or "If an all-powerful, all-knowing AI is created, will it simulate and torture us?"
Seriously. If we ever develop real AI, we would have effectively created a new intelligent species. It'd probably completely transform how we, as a culture, think of ourselves, intelligence, personhood and possibly even things like domestication and pet ownership. The implications are seriously staggering. Super-intelligent robot gods doing whatever is space opera by comparison.

My hope is that if I ever created an intelligent being, I'd feel about it like I would about any child. :unsmith:

ArchangeI
Jul 15, 2010

Iunnrais posted:

Oh, awesome! I don't suppose it's published online anywhere I could read? I love these kinds of technical/social sci-fi works.

It's out for kindle on amazon, but if you send me a message at robertdreyerhro[at]gmail.com, I can send you a copy.

Centripetal Horse
Nov 22, 2009

Fuck money, get GBS

This could have bought you a half a tank of gas, lmfao -
Love, gromdul

ArchangeI posted:

By "lying around" I meant "I have written and published", but thanks for the link anyway. It's a good story that makes a few very interesting points that I didn't consider when writing mine, like the AI be willing to give up its hardware for newer versions, in a sense like people give up their lives for their children. Personally, the question of how society would deal with that sort of technology, how it would transform our idea of what "human" is, particularly in terms of universal human rights, is so much more interesting than "What do I do when I'm immortal and get bored with my catgirl harem?" or "If an all-powerful, all-knowing AI is created, will it simulate and torture us?"

Have you read Crystal Nights by Greg Egan? That story also deals with the ethics of developing artificial intelligence, specifically touching on what you mentioned. It's my favorite artificial intelligence story of all time, with the possible exception of Press Enter by John Varley.

SolTerrasa
Sep 2, 2011


Cardiovorax posted:

"Hard" isn't even really the issue. The single biggest problem with AI is that we do not, in fact, understand what intelligence even is or where it comes from. We have some competent ideas about which areas of the brain process what kind of data, but how all that comes together to make intelligence, never mind consciousness? Nothing. The human brain is the most complicated information-processing machine ever made. The idea that we can make something equal or better than something of which we have no how idea how it even works is just laughable. AI research basically comprises of randomly floundering in the dark and hoping to accidentally hit on something that's kinda like the only intelligent thing we know of.

Existing "AI" is really as intelligent as a bunch of rocks. Under the hood it's all ultimately based on really simple algorithms. What self-improvement there is universally lies in improving the data set that they work with. None of them can actually improve their own decision-making process.

You're totally right regarding "hard" being a nonsense word in this case. Work is pretty consistently chill about me posting to the forums, but I've gotta get something done sometime, so I oversimplified.

I think I disagree with you on two points, though, now that I have a few minutes. As a dude who did/does some AI research, I don't really feel like I'm floundering in the dark. AGI (artificial GENERAL intelligence, the thing that most people think of when they think of AI) research is probably exactly like that (and what Big Yud is doing is CERTAINLY like that), but research into specific subfields, where you have concrete goals to accomplish and you know if you got it right (as in, "this system successfully translated English to Spanish", or "navigated a maze" or "detected the presence of a cat in a photograph", etc) can be much more sane. And that's what we do, really. Almost nobody is working on AGI research, because stumbling around in the dark for 50 years is really unrewarding.

Second thing is the claim that no existing AI can improve its own decision-making. They can, but it's really, really crude and elementary. The best I've seen thus far is a system that, when given basic actions like "move east" and "open door", can consistently combine these actions into what we call "macros" (for instance, "enter the room behind the northmost door"). This is done in order to improve its own planning algorithm, since shorter plans are easier to compute.
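
To make that concrete, here's a toy sketch I'm making up on the spot (not that system's actual code): a dumb brute-force planner over primitive actions, where a successful action sequence gets folded back into the action set as a macro, so the next search finds a shorter plan.

code:

from itertools import product

def plan(start, goal, actions, max_depth=4):
    # Brute-force search over action sequences; returns the first one that reaches the goal.
    for depth in range(1, max_depth + 1):
        for seq in product(actions.items(), repeat=depth):
            state = start
            for _, act in seq:
                state = act(state)
                if state is None:  # action not applicable here
                    break
            if state == goal:
                return [name for name, _ in seq]
    return None

def add_macro(actions, name, seq_names):
    # Fold a successful sequence of primitives into a single new action ("macro").
    def macro(state):
        for n in seq_names:
            state = actions[n](state)
            if state is None:
                return None
        return state
    actions[name] = macro

# Tiny example: a corridor where the agent has to reach position 3.
actions = {
    "east": lambda s: s + 1 if s < 3 else None,
    "west": lambda s: s - 1 if s > 0 else None,
}

first = plan(0, 3, actions)              # ['east', 'east', 'east']
add_macro(actions, "east_x3", first)
print(plan(0, 3, actions, max_depth=1))  # ['east_x3'] -- a one-step plan this time

The planner's code never changes, but its action set does, which is the sense in which it "improves its own planning."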

But I think those are quibbles, because you were probably talking about AGI research when you said AI research, and macros as self-improvement is debatable.

In any case, you used two words I really like hearing debate about, and you seem like an intelligent/informed internet-person; what do you think it would mean for an AGI to be "intelligent" or "conscious"? Usually we use the former word to mean "accomplishes its goals quickly and efficiently" (so existing systems count) and we never use the latter word at all.

E: but to me, it doesn't mean anything. It's one of those "I know it when I see it" sorts of things, and setting out to do top-down design of an AI system with the specification "should be conscious" is like trying to do web design for one of those terrible clients who "just wants it to pop!". Except you can't, because it's 1850, and by the time you've invented transistors and computers and TCP, worrying about the website just seems stupid.

That analogy got away from me, I mean to say that I don't think "consciousness" means anything in particular, and I think that a "conscious" AI wouldn't be notably different from a normal one. I predict that by the time we have/if we ever have solved all the problems that we'll need to solve to get AGI, "consciousness" will not be one of them.

SolTerrasa fucked around with this message at 18:51 on Aug 21, 2014

ArchangeI
Jul 15, 2010
Personally, I would argue that a "conscious" AI would be aware of its own existence as an entity separate from the world around it, i.e. a Ghost in the Machine. Which is probably ridiculously hard to make into a coherent scientific criterion you can test against, so you can't fault AI researchers for not considering it a research goal, especially since it's probably not something you can develop gradually. Nothing is ever "sort of" conscious.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SolTerrasa posted:

I think I disagree with you on two points, though, now that I have a few minutes. As a dude who did/does some AI research, I don't really feel like I'm floundering in the dark. AGI (artificial GENERAL intelligence, the thing that most people think of when they think of AI) research is probably exactly like that (and what Big Yud is doing is CERTAINLY like that), but research into specific subfields, where you have concrete goals to accomplish and you know if you got it right (as in, "this system successfully translated English to Spanish", or "navigated a maze" or "detected the presence of a cat in a photograph", etc) can be much more sane. And that's what we do, really. Almost nobody is working on AGI research, because stumbling around in the dark for 50 years is really unrewarding.

Second thing is the claim that no existing AI can improve its own decision-making. They can, but it's really, really crude and elementary. The best I've seen thus far is a system that, when given basic actions like "move east" and "open door", can consistently combine these actions into what we call "macros" (for instance, "enter the room behind the northmost door"). This is done in order to improve its own planning algorithm, since shorter plans are easier to compute.

But I think those are quibbles, because you were probably talking about AGI research when you said AI research, and macros as self-improvement is debatable.
Right, when I said AI I specifically meant AGI in this case. I agree with you, specific AI like pathfinding or translation is comparatively easy and can be approached systematically perfectly well, although this is, in my opinion, primarily a result of the goal being very narrowly circumscribed and ultimately simple. The more you generalize it, the more you realize that, as it stands, you simply don't have the required information to make informed judgments on which approach to pursue.

Macros are part of what I mean when I say "data set." It improves its results by having a larger set of instructions to choose from, but it can't actually critically examine and modify its own decision-making algorithm. In my own work, an equivalent situation would be to supervise the learning of image recognition software as it tries to recognize a pattern in a collection of images, telling it whenever it gets something right. It accumulates a larger and larger repository of which constellations of pixels are "good," but the underlying mechanics do not change.
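
If you want that in code, a made-up toy version looks something like this, with nearest-neighbour on pixel distance standing in for whatever the real system does:

code:

def distance(a, b):
    # Squared pixel-wise distance between two images of equal size.
    return sum((x - y) ** 2 for x, y in zip(a, b))

class PatternRecognizer:
    def __init__(self):
        self.examples = []  # the growing "data set": (pixels, label) pairs

    def classify(self, pixels):
        if not self.examples:
            return None
        # The decision rule itself is fixed: label of the closest stored example.
        closest = min(self.examples, key=lambda ex: distance(ex[0], pixels))
        return closest[1]

    def supervise(self, pixels, correct_label):
        # All the "learning" is here: we just remember what the supervisor told us.
        self.examples.append((pixels, correct_label))

r = PatternRecognizer()
r.supervise([0, 0, 1, 1], "foreground")
r.supervise([1, 1, 0, 0], "background")
print(r.classify([0, 0, 1, 0]))  # "foreground"

The repository of "good" pixel constellations grows, but classify() never rewrites itself.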

SolTerrasa posted:

In any case, you used two words I really like hearing debate about, and you seem like an intelligent/informed internet-person; what do you think it would mean for an AGI to be "intelligent" or "conscious"? Usually we use the former word to mean "accomplishes its goals quickly and efficiently" (so existing systems count) and we never use the latter word at all.

E: but to me, it doesn't mean anything. It's one of those "I know it when I see it" sorts of things, and setting out to do top-down design of an AI system with the specification "should be conscious" is like trying to do web design for one of those terrible clients who "just wants it to pop!". Except you can't, because it's 1850, and by the time you've invented transistors and computers and TCP, worrying about the website just seems stupid.

That analogy got away from me, I mean to say that I don't think "consciousness" means anything in particular, and I think that a "conscious" AI wouldn't be notably different from a normal one. I predict that by the time we have/if we ever have solved all the problems that we'll need to solve to get AGI, "consciousness" will not be one of them.
Flattery will get you everywhere with me. ;-* Yes, that's one of those hard questions and I agree with you. Consciousness is not a well-defined term to begin with, which makes researching it even more difficult than it already is. I think it's a required term if we really want to understand the human mind, though.

And yeah, you got me basically right there. When I say intelligence, I am talking about a generalized ability to solve complex problems correctly, process information and critically examine and improve your own decision-making process. The lack of the latter ability is why I put conventional AI into air quotes just now. This kind of intelligence may or may not require a sophisticated model of itself and may or may not require consciousness. I do not think we have sufficient information to decide, but given that we can already successfully imitate isolated functions of our own mind, I'm fairly confident we could make a fairly strong AGI without needing to give it consciousness. On the other hand, we know of nothing comparably intelligent to a human mind that does not have consciousness. It's a hypothetical at this point.

Consciousness itself is, as you said, very badly defined. I can only call it that intangible sense of "I exist," the sense of self-awareness humans have which tells them that they have a unique perspective and exist separately from their environment. It's hard to express meaningfully without waxing poetic because it's so subjective. How can you really, unambiguously tell that any other intellect is really conscious? You only know you are because you experience yourself as having consciousness. Whatever it is, it undeniably exists and is arguably separate from intelligence, given that all humans are, to our best knowledge, equally conscious without being nearly equally intelligent. It may or may not be a quality that emerges spontaneously once a mind becomes sufficiently complex and it may or may not be a quality of everything that has a brain bigger than a pinhead. We just don't know.

Master of None
Nov 4, 2011

ArchangeI posted:

Personally, I would argue that a "conscious" AI would be aware of its own existence as an entity separate from the world around it, i.e. a Ghost in the Machine. Which is probably ridiculously hard to make into a coherent scientific criterion you can test against, so you can't fault AI researchers for not considering it a research goal, especially since it's probably not something you can develop gradually. Nothing is ever "sort of" conscious.

Windows Task Manager is aware of its own existence (it knows about its own process), but I don't think anyone would say that makes it "conscious".
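
You can get that kind of "self-awareness" in a couple of lines, for whatever that's worth:

code:

import os

# Knows about its own existence as a process; nobody would call it conscious.
print(f"I exist: I am process {os.getpid()}.")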

SolTerrasa
Sep 2, 2011


Cardiovorax posted:


Macros are part of what I mean when I say "data set." It improves its results by having a larger set of instructions to choose from, but it can't actually critically examine and modify its own decision-making algorithm.


Formally, the action set of a planning system is a component of the algorithm, so it does technically count as modifying the decision-making. But that's what I meant when I said it's debatable, it depends on whether you're using formal language or just talking to somebody on the internet.

Cardiovorax posted:

Consciousness itself is, as you said, very badly defined. I can only call it that intangible sense of "I exist," the sense of self-awareness humans have which tells them that they have a unique perspective and exist separately from their environment. It's hard to express meaningfully without waxing poetic because it's so subjective. How can you really, unambiguously tell that any other intellect is really conscious? You only know you are because you experience yourself as having consciousness. Whatever it is, it undeniably exists and is arguably separate from intelligence, given that all humans are, to our best knowledge, equally conscious without being nearly equally intelligent. It may or may not be a quality that emerges spontaneously once a mind becomes sufficiently complex and it may or may not be a quality of everything that has a brain bigger than a pinhead. We just don't know.

This was interesting, thanks. I don't think we agree, but I admit that it is literally impossible to have proof of any sort here, so the only possible discussion is purely theoretical and based on, what's the word, qualia? Subjective experience of life. Anyway, the trouble is that I can write a really simple bit of code to fulfill any technical definition anyone can give (someone will undoubtedly have beat me to the "program which queries its own PID" example demonstrating uniqueness and distinction from the environment), and so I don't think it's even worth mentioning as a constraint on AGI at all, since it doesn't *mean* anything. To me, you might as well ask "does the AGI have a soul?".

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SolTerrasa posted:

This was interesting, thanks. I don't think we agree, but I admit that it is literally impossible to have proof of any sort here, so the only possible discussion is purely theoretical and based on, what's the word, qualia? Subjective experience of life. Anyway, the trouble is that I can write a really simple bit of code to fulfill any technical definition anyone can give (someone will undoubtedly have beat me to the "program which queries its own PID" example demonstrating uniqueness and distinction from the environment), and so I don't think it's even worth mentioning as a constraint on AGI at all, since it doesn't *mean* anything. To me, you might as well ask "does the AGI have a soul?".
"Possessed of qualia" is quite possibly a meaningful definition of consciousness, so I'm not going to argue there. But then again so is "has a soul," so I this doesn't really lead anywhere either. I suppose there's a reason why smarter people than you and me ran up against this problem and couldn't answer it.

Iunnrais
Jul 25, 2007

It's gaelic.
Human beings have a part of their brain dedicated to simulating other people, letting us imagine how another person might react to certain stimuli. This part of the brain is notable for not being picky about who gets simulated there... fictional characters are notorious for becoming "real" and fighting back against their authors once said character becomes complex enough to enter this part of the brain.

I think what people are really looking for is an AI that is sufficiently advanced that they can stick said AI into this part of their brain. Because anything too simple gets rejected and your brain won't accept them as a "real person". I think that if any AI is complex enough to be entered into this part of the human brain, people will generally FEEL, emotionally, that the AI is "conscious".

It's as fair a metric as any, I think.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

You realize you can do that with a robot programmed to ask you not to turn it off, right? There was an experiment done with that, where at the end you had to turn it off but it was made to tell you that it would lose all of its memories of you and your interactions with it when you did, and people got really distressed about having to turn it off.

The robot was not actually a person in any sense.

Iunnrais
Jul 25, 2007

It's gaelic.
I'd argue that even when distressed and not wanting to turn the robot off, they haven't really had the robot be fully "real" in their heads. Not in the sense that they could, for example, simulate a full conversation on a variety of topics with the robot in their heads. The trick is sufficient complexity to be a fully fledged character, like in a book.

Research suggests that you can only fit about 150 people into this part of the brain anyway. See: Dunbar's Number. So there is a threshold of complexity... you'd hesitate to shoot someone you say hello to in the store too, but they aren't "real" to you yet. Not until you "get to know them".

But again, I think that this is simply what people want to have happen when they say they want an AI that is conscious. They want something with sufficient complexity that they can "get to know" and contribute to their personal "Dunbar's Number".

Tunicate
May 15, 2012

Dunbar's number is just a piece of pseudoscience anyway.

SolTerrasa
Sep 2, 2011


Tunicate posted:

Dunbar's number is just a piece of pseudoscience anyway.

Yeah, that's made up. This is where the problem is, you see. Everybody THINKS they know something about what these big impossible-to-define concepts mean and about how AI should work because, after all, everyone has experience with "being intelligent", right? It's the bikeshed problem, it's what makes Big Yud's work so hilarious to me. The only way to know things about AI for real is to build something.

The thing is, I actually liked that first idea you had, I liked the explanation that people would consider an AI sentient/conscious if it acted in a way that would allow them to predict it via simulation. But you have to know where to stop, or you go off and develop crazy theories and become the next Big Yud.

The whole "tribe size" deal is conjecture at best. I'm not aware of any research specifically disproving it, but that's not where the burden of proof is for :biotruths: claims. IIRC, even the research on mirroring isn't conclusive (in the same way that research about brains is always fiddly).

The best thing that a responsible person can say about "consciousness" or whatever is "we don't know, but here's what I think". It's hard to even say "and here's the research which is consistent with my opinion" because as far as I know there really isn't any research worth the name into this sort of thing, so anything that gets thrown in is usually post-hoc rationalization, of the "I thought of this cool thing, oh look here's science that makes it look truer" sort.

That said, again, I actually really like your idea and I haven't heard it before. If someone cleaned the iffy-science goop off of it I honestly wouldn't be surprised to see it at AAAI or something, as background for some new agent-modelling paper or something. Hell, as far as I know it's what the KISMET people thought. Not my subfield.

E:

Cardiovorax posted:

"Possessed of qualia" is quite possibly a meaningful definition of consciousness, so I'm not going to argue there. But then again so is "has a soul," so I this doesn't really lead anywhere either. I suppose there's a reason why smarter people than you and me ran up against this problem and couldn't answer it.

In what sense is that a meaningful definition? I got the impression that you're a smart person who works in research / tech / a tech-adjacent field, and it was a long day at work, so probably I'm reading this wrong. Maybe you were agreeing with me that the best answer is to throw our hands in the air and declare the working definition of "know it when I see it" correct enough for now, albeit useless.

But if I am reading it right, then that definition just punts the question to a new word; what's "qualia"? How do you know if something has it? And why isn't the PID-querying program possessed of one?

SolTerrasa fucked around with this message at 03:03 on Aug 22, 2014

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

SolTerrasa posted:

In what sense is that a meaningful definition? I got the impression that you're a smart person who works in research / tech / a tech-adjacent field, and it was a long day at work, so probably I'm reading this wrong. Maybe you were agreeing with me that the best answer is to throw our hands in the air and declare the working definition of "know it when I see it" correct enough for now, albeit useless.

But if I am reading it right, then that definition just punts the question to a new word; what's "qualia"? How do you know if something has it? And why isn't the PID-querying program possessed of one?
I've got a compsci degree and worked in the field for a few years, now I study chemistry. That doesn't really qualify me to have nuanced opinions on philosophy of mind, but there you go. v:v:v

But yeah, I was really just agreeing with you in so many words. Without a way to define qualia or consciousness more precisely than "it's something people have" it's just not something you can usefully work with. I've read somewhere that it's the difference between knowing what "warm" feels like and being able to measure temperature, but that doesn't really help a lot.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy
As somebody who had a serious interest in Philosophy of Mind and ended up moonwalking out of the field in college because of how hosed the discussion around Qualia gets: you really don't want to go down this particular rabbit hole.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
I think the field with the most solid grasp on qualia is neuroscience. I liked Patricia Smith Churchland's most recent book (Touching a Nerve: The Self as Brain) a lot, though.

AlbieQuirky fucked around with this message at 06:29 on Aug 23, 2014

Chamale
Jul 11, 2010

I'm helping!



I feel like I'd be more inclined to accept an AI as conscious if its conscious-like behaviour was purely emergent and not the result of an active attempt to make a conscious AI. It's easy to fake consciousness, but if you made some kind of complex evolving algorithm that started independently trying to "explore" or "communicate" I'd be more likely to see that as consciousness. But maybe a fake consciousness is also more likely to emerge from an algorithm than a real consciousness. The only evidence I have that other people are conscious and not philosophical zombies is that I'm conscious, and it's irrational to assume that other people who are fundamentally similar to me are not.

Freemason Rush Week
Apr 22, 2006

This is why I think studying the brain is so important; we don't really "get" consciousness yet, and I don't think we will until we have a better understanding of the mechanisms which produce it in ourselves. One of the reasons why people got the timeline for AI development wrong is because they underestimated just how complicated cognition is in the first place.

LazyMaybe
Aug 18, 2013

oouagh

tonberrytoby posted:

There is a difference between unrealistic and useless. Graham's Number is called the largest useful number. And it is large enough that it can't be expressed directly by Knuth's arrow notation.
Late as gently caress, but wanted to say that the wiki article on Graham's number says-

quote:

Even power towers of the form -snip- are insufficient for this purpose, although it can be described by recursive formulas using Knuth's up-arrow notation or equivalent, as was done by Graham.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

AlbieQuirky posted:

I think the field with the most solid grasp on qualia is neuroscience. I liked Patricia Smith Churchland's most recent book (Touching a Nerve: The Self as Brain) a lot, though.

I happen to agree with her quite a lot and the field would be a lot more bearable if more people adopted the Churchlands' approach of actually using physical science.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
That's an unreasonable demand and you know it. Can you imagine how many poor philosophers would suddenly be unemployed? All of them, that's how many.

Lamprey Cannon
Jul 23, 2011

by exmarx

IronicDongz posted:

Late as gently caress, but wanted to say that the wiki article on Graham's number says-

The post you're quoting talks about writing it directly in terms of up-arrow notation, which you can't. The Wikipedia article says you can define it recursively using up-arrow notation. g1 can be written using up-arrow notation, but anything beyond that is going to require more arrows than there are atoms in the universe.
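
The recursion itself is tiny, for what it's worth; here's a naive sketch, only usable for toy inputs:

code:

def up_arrow(a, n, b):
    # Knuth's up-arrow: a followed by n up-arrows, then b.
    if b == 0:
        return 1
    if n == 1:
        return a ** b
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# g1 = up_arrow(3, 4, 3) is already far beyond anything a computer could represent.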


Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

IronicDongz posted:

Late as gently caress, but wanted to say that the wiki article on Graham's number says-

Knuth's up-arrow notation lets you write numbers like 3^^^^3, or 3^^^^^^^^3, or 7^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^7. It doesn't let you write Graham's Number, because Graham's Number is too big. Graham's Number looks like this:
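
In symbols, writing \uparrow^{n} as shorthand for a stack of n up-arrows (this is just the standard way of writing the construction):

\[
g_1 = 3 \uparrow\uparrow\uparrow\uparrow 3, \qquad
g_{k+1} = 3 \uparrow^{g_k} 3, \qquad
\text{Graham's Number} = G = g_{64}.
\]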



First, you use up-arrow notation to get a number. Then you take a number of up arrows equal to that number and do up arrow notation again. Then you take a number of up arrows equal to that number and do up arrow notation again. Then you take a number of up arrows equal to that number and do up arrow notation again. And so on. After you've done this sixty-four times, you finally have Graham's Number.

You can use recursive iterations of up-arrow notation to reach Graham's Number, but you can't actually write Graham's Number in up-arrow notation directly. To take a simpler example: you can't write a number as big as a googol in the form 2+2+2+2+..., because a googol is too big and there aren't enough atoms in the universe to write out all the 2+'s. But you can reach numbers as big as a googol in the form 2^(2^(2^...)), and exponentiation is like repeated multiplication, and multiplication is like repeated addition, so that's kind of like being able to write a googol just by adding a bunch of 2's together, right?

Except it really isn't. The same principle applies here: you can write Graham's Number through a method that involves up-arrow notation, but you can't write it directly using up-arrow notation.

Thank you for posting a wiki article that agrees with a three-month-old post I guess?

Here's a question that Yudkowsky thinks a lot of people are asking and is worth serious consideration:

quote:

Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

Yudkowsky's answer is 6000 words about the industrial revolution and the GDP and speculations about macroeconomics that he admits he doesn't actually know anything about. It seems to me like a simpler answer would be "What advances in Artificial Intelligence?", since by the way Yudkowsky defines AI there haven't been results worth mentioning (other than the super-dangerous ones that MIRI is nobly suppressing).
