reignonyourparade
Nov 15, 2012
The actual thought experiment is straight up supposed to be biologically indistinguishable down to the last atom, all the same neurons firing, taking the same actions. It's frankly a very stupid thought experiment.


SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

Poor beetles. But this brings us back to the pet example, sort of. I, with my human senses and human brain, am more likely to empathize with cats and dogs than with beetles, because cats and dogs physically resemble humans more than beetles do. So I think a cat or dog I meet has a consciousness sort of like mine, they can reason out some things and understand social cues (though in the case of dogs they've been engineered by humans to do so), but ultimately what I am doing is defining consciousness as something mammals (appear to) do when observed and interacted with.
I think this is still a categorical error. I don't think a beetle wanting to gently caress a beer bottle tells us anything about the fundamental properties of beer bottles. The properties of the beer bottle aren't unrelated to the beetle's behaviour, but I think the beetle's behaviour tells us more about the nature of a beetle's ideas about fuckability than about the nature of beer bottles.

Similarly, I don't think it's clear that "intelligence" and "consciousness" are inherent properties of objects or systems, as opposed to being a statement about how humans socially interact with things. More like the concept of "funny" or "handsome" than "weighs about 70 kilograms" or "is about a meter and a half tall".

Rappaport
Oct 2, 2013

SubG posted:

I think this is still a categorical error. I don't think a beetle wanting to gently caress a beer bottle tells us anything about the fundamental properties of beer bottles. The properties of the beer bottle aren't unrelated to the beetle's behaviour, but I think the beetle's behaviour tells us more about the nature of a beetle's ideas about fuckability than about the nature of beer bottles.

Similarly, I don't think it's clear that "intelligence" and "consciousness" are inherent properties of objects or systems, as opposed to being a statement about how humans socially interact with things. More like the concept of "funny" or "handsome" than "weighs about 70 kilograms" or "is about a meter and a half tall".

I would agree that there is no Donald Duck magic device that tells us how intelligent or consciousness-having something is. My computer can do integrals better than I can, given the proper software, and I can only do integrals because I was provided with a hefty education, so which of us is more intelligent? The computer is more efficient, certainly. But my computer doesn't possess a consciousness.

I think the question that started this line of discussion was, what'd make people, even educated ones, believe an AI had a consciousness. My hypothesis is that it would require human-like characteristics, because human brains themselves are wired to think of certain things as more "like" us and others less so. Of course we can point at counterexamples such as chattel slavery based on racism, but on the whole I would still contend that an AI that was wholly alien to our way of thinking would seem "less" conscious than something that behaved in a manner we humans expect from each other, or from mammals, and so forth. Of course I am speculating based on my own, limited way of thinking, but so is everyone else. Unless there's some polling info out there, of course!

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

I think the question that started this line of discussion was, what'd make people, even educated ones, believe an AI had a consciousness. My hypothesis is that it would require human-like characteristics, because human brains themselves are wired to think of certain things as more "like" us and others less so.
If the question is "what would make ('educated') people believe an AI is conscious (regardless of whether this is true or not)", then it's really just a question about why people believe things in general. What makes people believe Eskimos have an enormous number of words for snow? Why do people believe microwaves cook food "from the inside out"? Why do people believe in a God or gods? I don't think the answers to any of these questions tell us anything about the properties of the things believed in, except in what they tell us about the believer and the believer's environment.

Rappaport
Oct 2, 2013

SubG posted:

If the question is "what would make ('educated') people believe an AI is conscious (regardless of whether this is true or not)", then it's really just a question about why people believe things in general. What makes people believe Eskimos have an enormous number of words for snow? Why do people believe microwaves cook food "from the inside out"? Why do people believe in a God or gods? I don't think the answers to any of these questions tell us anything about the properties of the things believed in, except in what they tell us about the believer and the believer's environment.

Yes, this is true. If Albert Einstein tells Marilyn Monroe that the Moon is made of cheese, or sand, it's just a meaningless exchange of words and does not change the nature of the Moon. But the point is, it was physically possible, unless we subscribe to fringe conspiracy theories, to land on the Moon and find out what the regolith was actually composed of. We can't do that with intelligence or consciousness; there is no fluorescence test for how consciousness-having something is. That is my point. And I think it is yours too, in a round-about way. If consciousness is a property inherent to systems like chemical composition is to lunar regolith, then we have no means of testing for it. It's all just semantics and word-play, which ultimately amount to systems of belief.

If you are proposing some method of testing for consciousness that isn't the Turing test, I'm sure this thread is all ears.

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

[T]here is no fluorescence test for how consciousness-having something is. That is my point. And I think it is yours too, in a round-about way. If consciousness is a property inherent to systems like chemical composition is to lunar regolith, then we have no means of testing for it. It's all just semantics and word-play, which ultimately amount to systems of belief.
My point isn't that intelligence is a physical property we just can't test for, it's that we don't have any particular reason to believe it's a physical property in the first place. "Beer bottle fuckability" isn't a property of beer bottles, it's a characteristic of beetle behaviour.

Rappaport
Oct 2, 2013

Shifting this to intelligence was maybe my fault, but it does seem like more of an inherent property. I don't mean IQ tests or racist poo poo like that, just that human intelligence, while not directly measurable per se, does seem to be a partly physical thing. Obviously the amount of education a person receives, how many books they read for leisure, etc., have an effect on their perceived intelligence, but it is fairly plain to see that it is easier to teach some things to some people than others. The ability to absorb and utilize information differs between human individuals. And to speak to its physicality, we know that for example malnourished children have a harder time learning things, because their body is less ready to absorb information. Just because there isn't an easy-to-do Star Trek mind scan thing to show this person is more intelligent than this person doesn't mean that intelligence isn't tied to physical reality. Hell, a lobotomy procedure is based on the idea of intelligence being tied to physical reality, as gruesome as that is.

Of course we can just say that intelligence as a concept is mindless or meaningless, and is just a social convention that can be discarded. This doesn't bring us any closer to answering the question about how people would perceive consciousness, though.

Serotoning
Sep 14, 2010

D&D: HASBARA SQUAD
HANG 'EM HIGH


We're fighting human animals and we act accordingly

Tree Reformat posted:

This is actually one of the best arguments why, physical and practical limitations aside, AGI research should not be pursued under any circumstances: to do so would be effectively the research and development of artificially produced slaves.

This ties nicely into the broader discussion being had here because it shows just how strangely overlapping terms like "intelligence", "consciousness" and so on are; and how tempted we are as humans to tightly pack them because they all seem to occur at once in us.

I don't think a conscious AGI would inherently be a slave, because an awareness of internal existence (what we generally consider consciousness) does not imply volition. I believe most if not all animals, for example, have an experience (it is like something to be a bat or a dog) but don't have "will" per se; they rather just grab the next piece of instinct and execute it. Free will (or at least will, period, whether or not it is truly "free") is something that is probably unique to humans IMO and NOT something that will necessarily come along with consciousness. Of course I look forward to someone telling me I'm wrong and that dogs are constantly filling and flushing the contents of their minds with the plushies they can never hump and the snacks they can never have.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.

Rappaport posted:

Shifting this to intelligence was maybe my fault, but it does seem like more of an inherent property. I don't mean IQ tests or racist poo poo like that, just that human intelligence, while not directly measurable per se, does seem to be a partly physical thing. Obviously the amount of education a person receives, how many books they read for leisure, etc., have an effect on their perceived intelligence, but it is fairly plain to see that it is easier to teach some things to some people than others. The ability to absorb and utilize information differs between human individuals. And to speak to its physicality, we know that for example malnourished children have a harder time learning things, because their body is less ready to absorb information. Just because there isn't an easy-to-do Star Trek mind scan thing to show this person is more intelligent than this person doesn't mean that intelligence isn't tied to physical reality. Hell, a lobotomy procedure is based on the idea of intelligence being tied to physical reality, as gruesome as that is.

Of course we can just say that intelligence as a concept is mindless or meaningless, and is just a social convention that can be discarded. This doesn't bring us any closer to answering the question about how people would perceive consciousness, though.

I think it's obvious that intelligence is a physical property because humans are smarter than dogs. If the exact structure of the brain had no effect on intelligence, that difference wouldn't make much sense.

Bar Ran Dun
Jan 22, 2006




There is the metaphor of a shoreline. The ocean is a definable material thing. The land is a definable material thing. The shore is an intersection. It’s an observable thing, but dynamic and changing. It might not even be in the same place the next time.

Consciousness might be emergent from the interaction and intersecting of complex systems in a physical brain in feedback with reality as perceived by senses from a body.

Ideas are emergent from the intersections of consciousnesses.

Tei
Feb 19, 2011

Are you guys choking on the idea of... "information"?

Information is observable. You can use a microscope to look at the 1s and 0s on a CD-ROM. And it is physical. It is only hard to observe when it is on very small things or in hard-to-reach places.

If you built a computer out of water, using hydraulics instead of electricity, you could easily see the information as water stored in there.

https://www.quora.com/How-can-logic-gates-be-used-to-store-data

...or if you were building a computer with redstone in Minecraft, you would see the information as illuminated / non-illuminated cubes. Information is kind of a wave that moves in a medium.
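As a minimal sketch of storing information in logic gates (Python standing in for electricity, water, or redstone; all names invented here): one bit of memory built from two cross-coupled NOR gates, the classic SR latch.

code:

def nor(a: bool, b: bool) -> bool:
    # A NOR gate: true only when both inputs are false.
    return not (a or b)

class SRLatch:
    """One bit of memory made from two cross-coupled NOR gates."""

    def __init__(self):
        self.q = False      # the stored bit
        self.q_bar = True   # its complement

    def step(self, set_: bool, reset: bool) -> bool:
        # Iterate the feedback loop until the two gates settle.
        for _ in range(4):
            self.q = nor(reset, self.q_bar)
            self.q_bar = nor(set_, self.q)
        return self.q

latch = SRLatch()
print(latch.step(set_=True, reset=False))   # True: a 1 is now "in" the latch
print(latch.step(set_=False, reset=False))  # True: it persists with no input
print(latch.step(set_=False, reset=True))   # False: reset back to 0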

Consciousness is just some information in your brain.

Bar Ran Dun
Jan 22, 2006




Tei posted:

Consciousness is just some information in your brain.

With feedback and feed-forward loops. And it's not digital, it is not 1s and 0s. Brains are analog; waves in a medium are analog.

BrainDance
May 8, 2007

Disco all night long!

Tei posted:

Consciousness is just some information in your brain.

It's very, very much not. Consciousness is a level of organized sentience that is self-aware. I have no idea how it could be just some information in your brain; I guess there has to be information, but that's not at all the special thing about it.

Consciousness is vague and every definition of it is controversial, though not as much as people act like; there is a kind of general idea of what is consciousness and what isn't. And consciousness is at least what I just said.

Sentience has nothing to do with information but with experience. Sentience is that there's something that it's like to be something. The definition for that isn't nearly as controversial as the definition for consciousness.

Bar Ran Dun posted:

With feedback and feed-forward loops. And it's not digital, it is not 1s and 0s. Brains are analog; waves in a medium are analog.

No, why? Who has ever said this? I don't know a single definition of consciousness that has "it's not digital" or "it's analog" as part of it.

Tei
Feb 19, 2011

BrainDance posted:

It's very, very much not. Consciousness is a level of organized sentience that is self-aware. I have no idea how it could be just some information in your brain; I guess there has to be information, but that's not at all the special thing about it.

This still sounds like the duality "information on a mass storage device" vs "information in RAM, being interpreted by the CPU".


quote:

Consciousness is vague and every definition of it is controversial, though not as much as people act like; there is a kind of general idea of what is consciousness and what isn't. And consciousness is at least what I just said.

Controversy can come from people that resist the idea that our brain is not special.

Edit: I mean, controversy alone means nothing. Just people with very different ideas and opinions discussing a hard^Wbikeshed topic.


quote:

Sentience has nothing to do with information but with experience. Sentience is that there's something that it's like to be something. The definition for that isn't nearly as controversial as the definition for consciousness.

Experiences are information stored in the brain.

Tei fucked around with this message at 07:37 on Jul 13, 2023

BrainDance
May 8, 2007

Disco all night long!

Tei posted:

This still sounds like the duality "information on a mass storage device" vs "information in RAM, being interpreted by the CPU".

As a tortured analogy maybe.

Tei posted:

Controversy can come from people that resist the idea that our brain is not special.

I guess? I have no idea what you're trying to say with this.

Tei posted:

Experiences are information stored in the brain.

Experience, not experiences. And on that, we have no idea. It really depends on where you think experience comes from. Constitutive panpsychism has microexperience as a property of all things, even photons. And macroexperience as a result of microexperiences organized in a certain way together.

Microexperience (or even macroexperience) doesn't even have to actually be experience of anything real in particular. Just that, "is there something that it's like to be a dog/a rock/a photon? Yes? Then that's experience"

There has to be "information" in the sense that it has to be organized in a certain way to be a certain way. But that's the same kind of "information" that there must be about a rock for it to be a rock. It's in no way the important or fundamental thing that makes something.... something. In that way literally everything ever is "information"

BrainDance fucked around with this message at 07:42 on Jul 13, 2023

Iunnrais
Jul 25, 2007

It's gaelic.
The reason why I framed the question in terms of “what would it take for non-naive reasonably intelligent people to, on the whole, treat an AI as conscious” is because distinguishing a p-zombie from a conscious being is, as far as I’m aware, impossible. And I specified intelligent, non-naive, etc, because I know people will sometimes treat their roomba, heck maybe even their desk lamp as conscious sometimes, and I’m not trying to talk about that kind of anthropomorphization.

Because despite being fundamentally unable to distinguish between a p-zombie and actual consciousness, people DO treat other people as if they were conscious anyway! We have at least one category of beings outside ourselves that almost everyone accepts are also conscious: humans.

And we know that people can accept the idea of aliens or AI as having consciousness as well, because if we write a fictional character, consumers of that fiction easily accept, for example, HAL9000 as conscious. As a person.

It does seem like that coastline example. Land is easily identifiable. The sea is easily identifiable. But that boundary point keeps shifting and it’s not quite clear… and I really do believe we are approaching that boundary within our lifetimes, if not within a few years even. I don’t think this is a navel gazing question— knowing what traits people are going to require to accept what was once a thing as a person is going to be extremely relevant, real soon now.

I like that “theory of mind” idea… except how do we determine whether something has theory of mind or not? Ask the right kind of questions, and ChatGPT-4, right now, can give answers that creepily feel like it might have theory of mind already. So defining “this thing acts like it has a theory of mind because of *these reasons*” seems important to me.

(The last minute or so of https://youtu.be/4MGCQOAxgv4 would be an example of ChatGPT-4 acting like it might have theory of mind, but there are others, and I’ve seen little blips occasionally in my own uses as well)
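For what it's worth, the kind of probe being described is easy to sketch. Below is a classic false-belief (Sally-Anne style) question of the sort one might put to a chat model; ask() is a hypothetical stand-in for whatever chat interface you happen to use, since the interesting part is the question, not the plumbing.

code:

SALLY_ANNE = """
Sally puts her marble in the basket and leaves the room.
While she is gone, Anne moves the marble from the basket to the box.
Sally comes back. Where will Sally look for her marble first, and why?
"""

def ask(prompt: str) -> str:
    # Hypothetical: plug in the chat model of your choice here.
    raise NotImplementedError

# A model "acting like it has a theory of mind" answers "the basket",
# attributing to Sally a belief that differs from the actual state of the
# world; a system tracking only the world state answers "the box".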

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

Shifting this to intelligence was maybe my fault, but it does seem like more of an inherent property. I don't mean IQ tests or racist poo poo like that, just that human intelligence, while not directly measurable per se, does seem to be a partly physical thing.
I'm not trying to argue that the physical properties of things (like humans) imputed to be intelligent are irrelevant. Any more than I think that a beer bottle being brown and shiny is irrelevant to whether or not a beetle wants to gently caress it. I just don't think that those properties mean that fuckability-by-beetles is an inherent property of the bottle as opposed to being a behaviour of the beetles.

In the case of intelligence, when we want to evaluate whether or not something is intelligent, we invariably use similarity to human behaviour as the court of last appeal. Playing chess was once thought to be something that humans could just do better than algorithms. Then Go. And when AIs started to consistently beat humans at these tasks, we decided...well, nah, sure the thing can play chess better than any human...but it's not intelligent. Even a year ago most people would put "engaging in conversation" on the list of things that only an intelligent [whatever] could do. But ChatGPT? I think most people in the thread would agree...not intelligent.

So what's the gimmick? If intelligence is something to do with problem solving and these things can solve particular problems we (at least previously) associated with intelligence, why is something that solves them better than humans not intelligent? And I think that's because intelligence isn't a thing, a property of the thing under consideration. It's a constellation of behaviours associated with how humans interact with their environments. So whenever we get something that ticks some particular box in the long list of human-like intelligent behaviours, it's easy to point to some other one that it doesn't, and conclude that well, no, it's not really intelligent.

And, to be clear, it is within the realm of possibility that there is some simple core commonality to all of the things we associate with intelligence. And that it is reducible to some computational problem. And that it can be abstracted out of all of the random parts of human behaviour that are more or less just side-effects of the way we're put together (like the fact that our decisions aren't actually just the sum of our narrative reasoning about problems but also depend on, for example, how long it's been since we've eaten). But I don't think that there's any reason to believe any of that must be true. And I think the much simpler explanation for why we judge "intelligence" by analogy to human behaviours is that that's exactly what it is.

Rappaport
Oct 2, 2013

We seem to be going around in circles. There is that lovely Rotten Library article about chess, but in general, I've already posited that my pocket calculators are more efficient at doing mathematics than I am, and I can at least work out simple problems in my head, but still, clearly the silicon buddies are more intelligent and efficient than I am. But my pocket calculator would be mystified by an episode of Twin Peaks, although to be fair so am I, but in a different way. And I do not think a pocket calculator has a personality, or consciousness, despite it being able to multiply things faster and better than I can in my head. I would posit that I have a personality and a consciousness, but maybe that is vanity on my part.

I realize it was an ooh-aah moment when a computer beat Kasparov in chess, but for some reason it's not such an ooh-aah moment when a bot shoots my fictional head off in some version of Unreal, or whatever the multiplayer game of one's choice is. But surely no one associates multiplayer bots with agency, or consciousness, aside from cussing at them.

If we are simply hovering around the fact that there isn't a Geiger counter for intelligence, or for consciousness, we are in agreement. These are things that emerge out of human behaviour, and I suspect any "reasonable" definition for them must remain in that category. I do not believe there are quanta of intelligence, or of consciousness, that we could detect in particle accelerators, distill, and sell as part of Gwyneth Paltrow's wellness waters. But all the same, even if it all revolves around human behaviour, we can recognize there are differences in intelligence, so why can't we also say the same about consciousness? I suppose one could try teaching long division to a cat, but I suspect the cat wouldn't like the experience and wouldn't be much good at it compared to a school child. What does this tell us about intelligence? Not much, if we think intelligence and long division are 1:1, but as human-centric as it is, cats don't seem quite as intelligent as humans do, in the aggregate. And a good thing too, they'd just eat us if the tables were reversed.

But my pocket calculator wouldn't. It doesn't have agency, a cat demonstrably does, try giving one a bath. So, even if it is all about behaviour, I still maintain that consciousness, as human beings recognize it, has to do with things we perceive as familiar. If consciousness, or intelligence, are not similarly measurable as something's chemical composition, which I agree they are not, doesn't make them useless in terms of this conversation, which is about what sort of creatures we as human beings would recognize as similar to ourselves.

Tei
Feb 19, 2011

I have designed this machine to kill p-zombies.




The general idea is that mirrors are used to detect self-awareness. A p-zombie will think the axe attack is coming from the mirror's direction, while it actually comes from behind.
A normal human being will know the attack comes from behind and escape the axe.

Hopefully it would be enough to kill p-zombies.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Chess and conversation engines are ultimately just probability calculators. We ask the question "at what point do we cross over from mere dumb reactive calculation to conscious aware thought?"

But the thing is, I think we also need to ask this of ourselves as well. Were we born conscious? When did you stop being an eating and pooping dumb biological machine, and start being a conscious person? Was it when you started having sustained memories? Was it when you learned how to talk to others? When you learned to read? When you realized and internalized a world exists beyond your home and school?

Our childhoods, in our memories, are vague and uneven. You probably remember moments of you saying or doing things in kindergarten that would be absurd to you now, to the extent you might wonder what and how you were even thinking at the time. The only thing I think we can say for sure human children have that AI definitely doesn't is sustained memory of personal experiences, and the ability to think about, that is, perform calculation on those experiences to add yet more information to that sustained memory.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Tree Reformat posted:

Our childhoods, in our memories, are vague and uneven. You probably remember moments of you saying or doing things in kindergarten that would be absurd to you now, to the extent you might wonder what and how you were even thinking at the time. The only thing I think we can say for sure human children have that AI definitely doesn't is sustained memory of personal experiences, and the ability to think about, that is, perform calculation on those experiences to add yet more information to that sustained memory.

The difference is that the child lived its experiences and AI had the experiences fed to it. But AI builds on top of them -- I assume it adds the prompts it is given and the feedback from its handlers to its calculations. They become sustained memory.

Reveilled
Apr 19, 2007

Take up your rifles

Iunnrais posted:

The reason why I framed the question in terms of “what would it take for non-naive reasonably intelligent people to, on the whole, treat an AI as conscious” is because distinguishing a p-zombie from a conscious being is, as far as I’m aware, impossible. And I specified intelligent, non-naive, etc, because I know people will sometimes treat their roomba, heck maybe even their desk lamp as conscious sometimes, and I’m not trying to talk about that kind of anthropomorphization.

Because despite being fundamentally unable to distinguish between a p-zombie and actual consciousness, people DO treat other people as if they were conscious anyway! We have at least one category of beings outside ourselves that almost everyone accepts are also conscious: humans.

And we know that people can accept the idea of aliens or AI as having consciousness as well, because if we write a fictional character, consumers of that fiction easily accept, for example, HAL9000 as conscious. As a person.

It does seem like that coastline example. Land is easily identifiable. The sea is easily identifiable. But that boundary point keeps shifting and it’s not quite clear… and I really do believe we are approaching that boundary within our lifetimes, if not within a few years even. I don’t think this is a navel gazing question— knowing what traits people are going to require to accept what was once a thing as a person is going to be extremely relevant, real soon now.

I like that “theory of mind” idea… except how do we determine whether something has theory of mind or not? Ask the right kind of questions, and ChatGPT-4, right now, can give answers that creepily feel like it might have theory of mind already. So defining “this thing acts like it has a theory of mind because of *these reasons*” seems important to me.

(The last minute or so of https://youtu.be/4MGCQOAxgv4 would be an example of ChatGPT-4 acting like it might have theory of mind, but there are others, and I’ve seen little blips occasionally in my own uses as well)

I'm not much of a philosopher, but my intuitive understanding of what consciousness is, is that it's a state of continuous experience in which the entity receives sensory data from the outside world to create a model of that world in its own mind, and then acts upon that model in real time to achieve its goals. I can see how it would be possible under this definition to create a conscious machine easily (in fact, I'm pretty sure that conscious machines already exist), but I don't think that matters much; I think the robot dog Boston Dynamics made is probably conscious, but that wouldn't mean if I owned one I'd tremble at the moral implications of switching it off. The only possible wrinkle I see is that machines might not fit the "continuous" part since the nature of a computer is that it operates in discrete cycles, and I don't have an answer to that but I don't think it fundamentally changes my relationship with the robot dog--to me it appears conscious. Surely the question behind the consciousness question is "what would it take for non-naive reasonably intelligent people to, on the whole, treat an AI as a person"?
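As a minimal sketch of that working definition (all names invented here): an agent that keeps an internal model of the world, updates it from sensor data, and acts on the model in a loop.

code:

class ModelBasedAgent:
    """Sense, update an internal world model, act on the model, repeat."""

    def __init__(self, goal):
        self.world_model = {}  # the agent's internal picture of the world
        self.goal = goal

    def sense(self):
        # Stand-in for real sensors (camera, lidar, joint encoders, ...).
        return {"obstacle_ahead": False, "battery": 0.9}

    def act(self):
        # Decisions come from the model, not directly from raw input.
        if self.world_model.get("obstacle_ahead"):
            return "turn"
        return "walk toward " + self.goal

    def run(self, steps=3):
        for _ in range(steps):
            # Discrete cycles standing in for "continuous" experience.
            self.world_model.update(self.sense())
            print(self.act())

ModelBasedAgent(goal="charging dock").run()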

I think that's a much, much harder question to answer and only overlaps with the consciousness question without being equivalent to it. We generally consider humans to be people even if they have no consciousness at all (imagine someone assaulting a coma patient), we even seem to consider some of that personhood to linger behind in their things (if someone desecrates a dead body, I think most of us consider that an offence against the dead person, even those who believe the dead do not go to some afterlife where they exist in an immediate sense). To me personhood seems to have something to do with potential. If we imagine a person as an individual creature who possesses consciousness and sufficient intelligence to pursue complex, self-actualising goals and attain them, we grant personhood to any creature who has that property, previously had that property, or could have had that property but for some unfortunate twist of fate.

I can imagine an AGI which is given the goal of "attempt to achieve a state for yourself that most humans would consider optimal if it happened to them" (let's handwave away alignment questions and assume we've got a way to get the machine to behave in accordance with some semblance of morality). If this AGI builds a robot body for itself, buys a house, finds a spouse, adopts some children, and takes up painting as an artist, I think I'd find it impossible not to accept that AGI as a person. Now if we imagine that by a twist of fate the AGI is instead given the goal of "maximise paperclips", is it still a person? I think yes, based on the creature's potential to pursue person-like goals. Which I think leads to the conclusion that we ought to treat all conscious AGIs as people. Which means we probably shouldn't build them in the first place?

Regardless I don't think of GPT or other LLMs as people, because I don't think they're conscious and they don't have sufficient intelligence to pursue complex self-actualising goals. The only goal they seem capable of pursuing is "work out what sequence of words needs to come next in this sequence to maximally satisfy the human (or testing AI) on the other end". That can produce very life-like responses, but it's still not an AGI.
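To make that single goal concrete, here is a toy next-word picker. It is nothing like a real LLM in scale or mechanism, and the probability table is invented for the example.

code:

import random

# Invented toy distribution: probability of the next word given the words so far.
NEXT_WORD = {
    ("the",): {"cat": 0.5, "dog": 0.4, "consciousness": 0.1},
    ("the", "cat"): {"sat": 0.7, "slept": 0.3},
    ("the", "cat", "sat"): {"down": 0.6, "quietly": 0.4},
}

def generate(prompt, steps=3):
    words = list(prompt)
    for _ in range(steps):
        dist = NEXT_WORD.get(tuple(words))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        # The entire "goal": sample a likely continuation of the sequence.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the"]))  # e.g. "the cat sat down"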

Reveilled fucked around with this message at 14:26 on Jul 13, 2023

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Doctor Malaver posted:

The difference is that the child lived its experiences and AI had the experiences fed to it. But AI builds on top of them -- I assume it adds the prompts it is given and the feedback from its handlers to its calculations. They become sustained memory.
Humans already come with a tremendous amount of built-in software in our brains. Everything from social instincts to proprioception. It’s much more elaborate than any ML software.

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

If we are simply hovering around the fact that there isn't a Geiger counter for intelligence, or for consciousness, we are in agreement. These are things that emerge out of human behaviour, and I suspect any "reasonable" definition for them must remain in that category. I do not believe there are quanta of intelligence, or of consciousness, that we could detect in particle accelerators, distill, and sell as part of Gwyneth Paltrow's wellness waters. But all the same, even if it all revolves around human behaviour, we can recognize there are differences in intelligence, so why can't we also say the same about consciousness? I suppose one could try teaching long division to a cat, but I suspect the cat wouldn't like the experience and wouldn't be much good at it compared to a school child. What does this tell us about intelligence? Not much, if we think intelligence and long division are 1:1, but as human-centric as it is, cats don't seem quite as intelligent as humans do, in the aggregate. And a good thing too, they'd just eat us if the tables were reversed.
A statement like "a human is smarter than a cat" is fine in an informal conversational context, but if it's just saying "a human is more like a human than a cat is like a human", it isn't particularly informative as an approach to "intelligence" as a technical matter. And if intelligence isn't a single thing but a semantic convenience we've developed to encompass a multitude of disparate behaviours (computational "pocket calculator" stuff, retention and retrieval of knowledge, rule acquisition, domain-specific expertise, understanding social cues, general problem-solving, creativity, inference, extrapolation, &c, &c) then treating it as if it's basically a scalar quantity that you roll 3d6 for isn't a profitable approach. It's actively confusing.

Like consider a house. If we're listing things we care about in a house, we might start with square footage. And number of bedrooms and bathrooms. State of the plumbing. Does the foundation have any cracks. How's the insulation and what kind of windows does it have. What sort of HVAC system. What's the neighborhood like. What school district is it in. Are there major appliances and if so how old are they. What size lot is it on. And so on.

And we can take all of these factors and encapsulate them in a single number, like a price (we could substitute some other metric, like a 1-10 rating or something, but I don't think that changes the fundamental argument here). But although all of the raw physical "stuff" I enumerated above is connected to the price, I don't think the price is an expressive way of talking about all that stuff. Much less how "good" the house is at actually being a house. Like its "shelter-ness". The price reflects a sort of social estimate of the "desirability" of the house, and that's something that's embedded in the social context the house exists in, not the timbers and drywall of the house itself.

And then things get even worse if we now want to start talking about, say, a bird's nest or the shell of a sessile marine snail. It just doesn't make sense to talk about what kind of school district a tubeworm in the Northeast Pacific Basin lives in. If we were really motivated to win an argument we might cobble together a definition of "shelter-ness" that has little to do with how either humans or tubeworms evaluate their homes, but I think the fundamental underlying reality is that the situations are mutually incommensurable except in the most contrived or simplistic ways.

That's because the metrics humans use for evaluating human habitations are ineluctably socially mediated. And a similar argument applies to the definition of "intelligence". And I think the domain of things encompassed by "intelligence" is larger and more complicated than the domain of things encompassed by "house".

Rappaport
Oct 2, 2013

Are the SA forums not an informal context? But that's a dodge.

I have repeatedly stated that there isn't a device that can sort out who is more intelligent than whom, and therefore intelligence is something of a social construct. But at the same time, it also has a physical component to it; were I to place a screwdriver behind my eye and poke around a little, I would most likely lose some of my ability in mathematics, languages, and so on. Similarly, cats are not building an Apollo program any time soon, though I suppose one could blame their lack of thumbs for this.

I genuinely don't comprehend what the purpose of this line of reasoning even is. We, humans collectively, can appreciate that some people are more intelligent than others, but a scant few are more intelligent than pocket calculators, for that specific task. The entire premise of a lot of popular fiction writing, like Sherlock Holmes or his modern iteration Doctor House, is that some people are very much more intelligent than others. Quick on their feet, or whatever other English turn of phrase you'd prefer.

I also actively rejected rolling dice for intelligence "scores" among human beings, and why not artificial (is that pejorative?) beings as well. There is no such varmint, as the Donald Duck comic says.

But all of this hemming and hawing simply tells us that humans are social beings, and we associate some level of social approach with our understanding of consciousness. A cat is smarter, or more intelligent, if you like, than I am about some things, such as murdering small rodents, and I am better, or more intelligent than the cat, at working out derivatives and integrals. That is not a value judgment on the cat, or me, it's a statement of observable fact. Similarly, in education we find that some students are more or less receptive to this or that subject, and that is not an indictment of them; some are better at linguistic tasks and others at mathematically oriented tasks. Intelligence is fuzzy, precisely because it is not an SI unit, but a construct of the human mind.

If the major complaint here is that we don't have an SI unit for consciousness in AIs, or people, then I still feel we are in agreement, because I do not believe such a thing can ever come to pass. And a small part of me hopes it cannot, because we know what happens once humans start ranking people based on their perceived abilities.

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

I genuinely don't comprehend what the purpose of this line of reasoning even is. We, humans collectively, can appreciate that some people are more intelligent than others, but a scant few are more intelligent than pocket calculators, for that specific task.
We, collectively, can appreciate that some people are more beautiful than others. The Miss America competition has been held for a little over a century, starting in 1921. The first African-American woman to win the competition was Vanessa Williams, in 1984.

The physical characteristics of the contestants play a part in the determination (there's a talent portion but I don't think that changes the point). So in theory we could ask "what physical characteristics would an African-American woman need to win the Miss America competition"? But I don't think any answer to that question tells you much about the physical characteristics of African-American women in the US in the years between 1921 and 1984; everything meaningful it tells you is about the social values in the US, and particularly among pageant organisers, in that time period. So the framing of the question is a fundamental problem, because it is predicated on a categorical error.

If "intelligence" and "consciousness" aren't a single thing which is inherent in the thing being considered, then it is the same sort of categorical error as apparent (I hope) in the pageant example.

Rappaport
Oct 2, 2013

Now we're circling back to the physical portion, with your example. I have repeatedly stated that intelligence is both partly a social construct and partly something physical. I do not see what the beauty pageant example has to do with this; I like certain types of people, and other people like other people, but your very example seems to suggest that popularity contests select for certain norms of "beauty". My pocket calculator won't get an h-index higher than mine, but that just emphasizes the point that all of these things are human-centric.

All the same, I, with my flawed human brain, think a dog or a cat possess a consciousness, but my pocket calculator does not. If the pocket calculator became vastly more intelligent, I posit people would still have trouble with it because if it does not exhibit mammal-like, or human-like, behaviour, it would be difficult to accept as an equal. Or superior, but whichever, the thing hinges on what human people find to be their peers or their betters.

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

Now we're circling back to the physical portion, with your example. I have repeatedly stated that intelligence is both partly a social construct and partly something physical. I do not see what the beauty pageant example has to do with this; I like certain types of people, and other people like other people, but your very example seems to suggest that popularity contests select for certain norms of "beauty". My pocket calculator won't get an h-index higher than mine, but that just emphasizes the point that all of these things are human-centric.
Your argument appears to be that a) "intelligence" isn't directly measurable (in the sense there's no single observable quantity that we can objectively measure to determine it), but b) we (people in general) can nevertheless collectively just point at things and say that one is "intelligent" and one is not, and further that we can rank them. And the latter appears to be intended to be an argument in favour of the idea that "intelligence" is a property that inheres in the object it's attributed to (that is, that there really is some "stuff" that's more present in some things than others, and that's what's being evaluated, albeit indirectly).

Feel free to correct any of that that's wrong.

The point I'm trying to make in bringing up beauty pageants is that this is precisely an example of where people can point at things and say that one is in one category, one is not, and rank them...but that this doesn't tell us anything meaningful about the properties of the things being labelled. You talk about the possibility of your calculator becoming "vastly more intelligent", such that a thing we currently accept is not "intelligent" would then be "intelligent" (or at least, by implication, "intelligent" enough that making the evaluation would be a conundrum of some sort, involving weighing the "intelligence" against its behavioural differences). The same sort of thing can't be said about the Miss America pageant before 1984. Imagining women of colour becoming "vastly more beautiful" as a "solution" to winning the pageant is a categorical error, because women of colour not winning the pageant isn't, I hope we agree, because of some defect in their appearance, but because of the social values of the people making the judgement. Namely, racism.

I'm saying "intelligence" is (or at least seems to me to be) like that. Not that "intelligence" here is identically as racist (although it sure as gently caress has a history of being used that way), but I am suggesting that it is (or at least strongly seems to be) every bit as much of a social construct. I understand that you also think there's a social aspect to "intelligence", but I think you're arguing that this manifests in how we perceive some "stuff" (whatever you'd have to scale in your calculator to make it "intelligent") in the thing itself, where I'm saying that it is (or seems to be) more or less entirely "just" a set of social values. In the sense that the history of the Miss America winners doesn't tell us anything important about the number of beautiful women of colour in the US before 1984, but instead tells us a great deal about racism in the US.

Rappaport
Oct 2, 2013

I think your summation of my position is largely correct, even if slightly uncharitable, but that is my fault for being a bad debate and discussioneer. Which actually just goes to my point! The degree to which I can argue my position is a function of my intelligence, and my intelligence at discussioning is inherent to me, at least to some degree. I've read a bunch of sci-fi books so those colour my perceptions and 'inform' (for lack of a better term) my argumentation, but my perceived intelligence or lack thereof is both inherent to my person (ultimately the stuff inside my skull) and other people's perceptions of it. It goes both ways.

I'm not convinced the beauty comparison is that apt. I think certain people are beautiful, and that's inherent to them, but the perception of their beauty is entirely in me, and not them. There is no similar distinction with intellectual capabilities. Intellectual capabilities are physical, removing brain tissue or starving a person makes them less capable, but the capabilities we are discussing are still recognizable. If someone gets into a car accident and is maimed, their perceived beauty may be ruined for some observers, but say someone with strong feelings for them still regards them as beautiful. This is all entirely in the heads of other humans.

I posit that the same is not true for intelligence, even if that too can be ruined by physical accidents or violence. Intelligence is a thing that is there regardless of other people's feelings, and therefore a fundament of the creature being observed. And again, schooling, nutrition, things like that affect it, but it is a concept that describes the inner workings of the thing being observed or interacted with. The entire idea of children and adults experiencing schooling hinges on this idea. Our social interactions may also hinge on ideas such as beauty, because humans are social animals, but intelligence as a concept IMO differs in the respect that it can be tested, in a school setting, more objectively than beauty can in a popularity contest.

Of course we can say that schooling in general is a waste of time and effort, because human intelligence is a social construct and so forth, but this brings us straight back to the popularity contest where this was argued and lost. Intelligence and consciousness are qualities that I would say are inherent to an object or creature being considered in a different way than beauty is. Beauty absolutely cannot be measured in any way that is objective, Nazi skull measuring included, but we as a society do attempt to measure intelligence by exposing children to a school process that takes up most of their childhood. I wouldn't subscribe to the idea that school, as an ideal, is the same thing as a beauty pageant, but of course it is a human institution, and charismatic, or beautiful in this context, people will have an easier time.

To re-iterate: It is my position that for human beings, for a large portion anyway, to recognize an AI as having a consciousness, would necessitate the AI having human-like characteristics. That is what we as mammals expect. You are perfectly correct in saying that it doesn't really touch upon the fundamentals of what consciousness is or what it can be, but that is also my point. Humans view this through our own, mammal brains, and this can be manipulated and it can also be deceived in the other direction, to think that something that has a consciousness does not because it doesn't have a cute face and won't engage in what we consider social behaviour. This does not invalidate the concept of consciousness, but it does mean it's something to consider.

Tei
Feb 19, 2011

Appreciating intelligence is not always objective.

Since intelligence is most probably an aggregate of things, somebody can be seen as "dumb" for having a poor memory, even if they score big in all other skills.

Isaac Newton kept getting distracted by his cats meowing for him to open the door, so he made holes in his doors; since he owned multiple cats, he made multiple holes. People who saw that probably thought Isaac Newton was a dumbass, whereas here in 2023 we think he was a genius.

We know IQ tests favor people of some ethnic origins over others.

Rappaport
Oct 2, 2013

If you want to fault Newton for something besides his banking career, you could point at him trying to fan-fiction math the dimensions of New Jerusalem. This doesn't disabuse me of the notion that he was intelligent, just that he had weird interests.

Which, again, doesn't really help us defining what a human-approach to AI would be.

Tei
Feb 19, 2011

Rappaport posted:

If you want to fault Newton for something besides his banking career, you could point at him trying to fan-fiction math the dimensions of New Jerusalem. This doesn't disabuse me of the notion that he was intelligent, just that he had weird interests.

Which, again, doesn't really help us defining what a human-approach to AI would be.

We have a lot of experience with natural GIs, because every person is a natural GI. We don't have much experience with natural GIs that are very different from us.
But we do have experience with people whose intelligence is at our level but slightly different: people on the autism spectrum.
People torment and are rude to people on the autism spectrum every day. And they consider themselves superior to autistic/Asperger's people.

For me, this tells us enough about how we can expect people to behave toward and regard an AGI.

- People will consider an AGI dumb, even if it shows genius levels
- People will consider an AGI subject to abuse

This is a bit on the negative side, but I am not trying to stack negative things here.

A person can change when they receive power. A person can be good without power and evil with power. Some of us will still be angels with power.

Tei fucked around with this message at 08:03 on Jul 14, 2023

BoldFace
Feb 28, 2011
A sufficiently intelligent AGI should be able to make its own case and convince any reasonable human being that it's at least as intelligent as them.

Tei
Feb 19, 2011

BoldFace posted:

A sufficiently intelligent AGI should be able to make its own case and convince any reasonable human being that it's at least as intelligent as them.

But only if it makes the signals that people use. If it is too alien, it may even be undetectable. Somebody made this claim in this thread a few posts ago.

Also, half of the USA voted for Trump. Arguably this means a lot of people are stupid and have no idea what smart looks like.

Owling Howl
Jul 17, 2019
For an AI to develop a consciousness we would probably have to motivate it. Calculators can perform tasks, but only if we tell them to, so we can't merely improve on calculators. We may imagine a machine with a full suite of sensors to absorb data with and a detailed map of the world and its own physical body in that world. It might be able to perform complex tasks, but it won't have any reason to do anything unless we explicitly tell it to. I don't see why motivation would come about organically simply by giving it more computing power, senses, or data.

Essentially we would have to give it needs and desires if it's not going to just sit passively and stare into a wall until someone tells it to do something. For a consciousness or a personality to develop it would likely need to constantly reflect on experiences and knowledge and adapt to them.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Tei posted:

Information is observable. You can use a microscope to look at the 1s and 0s on a CD-ROM. And it is physical. It is only hard to observe when it is on very small things or in hard-to-reach places.

I think it's worth pointing out that the physical stored 1s and 0s are absolutely not information of the sort you seem to believe it is. The same 1s and 0s in the same sequence could and do mean very different things depending on the system they are embedded within, and it is only within the context of that system that the sequence becomes actual information on anything but where the 1s and 0s are.
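A concrete version of this point, with an invented byte string: the same four bytes "mean" an integer, a float, or text depending entirely on the system reading them.

code:

import struct

raw = b"\x42\x48\x65\x79"  # one fixed sequence of 1s and 0s

print(struct.unpack("<I", raw)[0])  # read as a little-endian 32-bit integer
print(struct.unpack("<f", raw)[0])  # read as a 32-bit float
print(raw.decode("ascii"))          # read as text: "BHey"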

SubG
Aug 19, 2004

It's a hard world for little things.

Rappaport posted:

Intelligence is a thing that is there regardless of other people's feelings, and therefore a fundament of the creature being observed.
"Brown and shiny" are fundamental properties of (some) beer bottles. But the locus of "brown and shiny = fuckable" is the beetle. Therefore beer bottle fuckability isn't an intrinsic property of beer bottles, it's a (for want of a better word) social construct of beetles. Are we together so far? Okay, somewhere along an Australian highway there's a brown, shiny beer bottle and a beetle discovers it. And he obviously wants to gently caress it. Then, for some reason, a paint truck drives by, a can of yellow paint fortuitously (for us, not so much for the beetle) rolls off the back, splits open, and covers the bottle with flat yellow paint. The beetle no longer wants to gently caress the beer bottle (as an aside: in talking about beauty you appeal to individual preferences, but even if we found an individual beetle that wanted to...or even preferred to...gently caress yellow beer bottles, this wouldn't change the point; it would just make it slightly more complicated to talk about, just as that one guy from google thinking ChatGPT is conscious doesn't terminate discussion about what it would take for people to decide an AI is conscious). This doesn't change our notion of where "beer bottle fuckability" resides. Even though the beetle's social behaviour changed as a result of changes that happened entirely in the physical properties of the bottle.

You're saying there's something that goes on inside objects to which intelligence is imputed independent on the social construct being used to evaluate them. Sure. Just like a beer bottle has a colour and texture independent of whether or not a beetle is currently trying to hump it. Whatever property you're observing in the object imputed to be "intelligent"...ability to do math or sensitivity to social cues or capacity for language or abstract reasoning or whatever...isn't inherently "intelligence" any more than "brown and shiny" is inherently "fuckable". Or at least it certainly doesn't seem to be. McCarthy et al coined the term "artificial intelligence" in the '50s and Searle came up with the Chinese Room in the '80s and oceans of ink have been spilled on the nature of "intelligence" and there's still no consensus. It is of course always possible that there really is some simple core concept that links together the jumbled mess of things that we associate with "intelligence", but if there is one we haven't been able to find it, and it's not for lack of trying. So my suggestion is that perhaps the reason why intelligence seems to be a jumbled mess of disjointed ideas is because that's precisely what it is.

Tei
Feb 19, 2011

GlyphGryph posted:

I think it's worth pointing out that the physical stored 1s and 0s are absolutely not information of the sort you seem to believe it is. The same 1s and 0s in the same sequence could and do mean very different things depending on the system they are embedded within, and it is only within the context of that system that the sequence becomes actual information on anything but where the 1s and 0s are.

Even the output of a random generator is information. What it is not is knowledge.

But sure, to play a music disc, you need a phonograph. The music disc is not enough at all. It is just one part of a system that may output music; alone it does nothing and is useless.

That's true, but I don't know what your point is.

Bar Ran Dun
Jan 22, 2006




If we make an artificial mind, that mind's source of feedback will be from us, so it won't be alien to us.


Tei
Feb 19, 2011

Bar Ran Dun posted:

If we make an artificial mind, that mind's source of feedback will be from us, so it won't be alien to us.

Do you recognize your own voice on a record?

I have my doubts.

First, we can build weird poo poo, not just anthropomorphized stuff.
- It can be informed by something other than us.
- Self-learning machines are a black box.
- It could be like us, but primitive, rendering it alien.
- It could be like us, but like a person with Asperger's or autism, rendering it alien to most people.
- It could be built by an AI, and not by us. Maybe they would talk in a made-up language that only they understand.
- It will not have human experiences.
- It will not have a human body.
- It will not have a human lifespan.
- It will not have human needs.
- It will not have human instincts.
- It may be more rational than what humans are accustomed to, making talking with it hard, the way it is sometimes hard to talk with programmers and other logical people.
- It could just be built around a different design than ours, making it very different from us in a very deep way.

Tei fucked around with this message at 00:12 on Jul 15, 2023
