Charlz Guybon
Nov 16, 2010

Doctor Malaver posted:

That reminds me of athletes who switched sports. You would assume that to get the best results as a soccer player, you want to start as early as possible and stick to it. But Zlatan Ibrahimović for instance had trained in martial arts as a kid and that gave him a specific edge.

Don't switch, just be the best!


Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Charlz Guybon posted:

Don't switch, just be the best!



I'm trying to be impressed but these stats don't mean anything to me. :(

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Doctor Malaver posted:

I'm trying to be impressed but these stats don't mean anything to me. :(

For context, you can always go with Bo Jackson being good enough to play at the professional level in 2 of the 3 primary US pro sports (baseball and football) at the same time.

Iunnrais
Jul 25, 2007

It's gaelic.
To change the topic slightly, I'm interested in the potential development of AGI. I'm not saying any current AI is AGI, but it seems plausible to me that advances in one or more existing AIs might incrementally lead to AGI. The trouble is... if that happens, what criteria would that be measured against? I mean, I hope everyone here believes that they, themselves, personally have consciousness, and thus that consciousness is a thing that exists even though we don't really understand it. Is there *anything* that an AI could demonstrate that would cause a majority of people to state, "Yeah, that thing is a thinking mind I can relate to as its own individual, similar in some way to how I can relate to other humans", akin to how AI is depicted in scifi?

I'm sure there will be people who insist, in the face of any and all evidence, that no AI will ever have consciousness or be truly sapient. And some who will insist that sentience (that is, the ability to perceive) will be a necessary prerequisite... though I'm not sure. I always thought sentience would precede sapience, but it looks like these pre-trained generators might actually get to a point where they can be said to really understand concepts in a broad way without having any connection to the "real world" via sensors and actuators and such at all.

Anyway, is there a bright line at all? I suppose this is likely to be a fuzzy line of distinction forever. As AI builds more capabilities, we're going to see more and more people say these things are thinking beings comparable to ourselves, and while I can't say they'd be right, it certainly looks like that statement is getting... if not more correct, then less incorrect, with each passing advancement.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
Rather than being a consciousness skeptic, I would say that consciousness is a rather arbitrary term with no agreed-upon precise definition. So it naturally follows that it is impossible to prove whether a computer (or a person, or an animal) possesses it.

Iunnrais
Jul 25, 2007

It's gaelic.
So, let's not try to prove it. What would it take to get you, or the majority of informed, non-naive, educated people, to treat something as if it had a consciousness?

DeeplyConcerned
Apr 29, 2008

I can fit 3 whole bud light cans now, ask me how!
I think people will tend to treat these things as conscious when they start possessing enough qualities that we associate with consciousness. Particularly facial expressions and the expression of emotion.

If I tell my butler bot to fetch me a brandy and it drops the glass, I might tear into the butler bot and call it a worthless piece of poo poo that should be sold for scrap. If the butler bot then frowns and slumps its shoulders, I may apologize and say I didn't mean it. I may know that the butler bot doesn't possess consciousness, but I would still feel bad treating a robot that displayed human characteristics like poo poo.

Rappaport
Oct 2, 2013

Possessing a credible theory of mind, for me.

I've seen a dog look at me via a mirror when I spoke to him, which to me signaled that a) he knew he was being spoken to b) he understood what a mirror was, and that I understood it too.

I'm not sure how to expand this to beings without a similar physical body. Obviously the famous HAL-9000 was sentient and had a theory of mind, but they went insane because of it. How does a chatbot make me understand that they have a theory of mind, instead of a Bayesian random walk that just tells me things the algorithms assume I want to hear? I don't have an answer to that.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

DeeplyConcerned posted:

I think people will tend to treat these things as conscious when they start possessing enough qualities that we associate with consciousness. Particularly facial expressions and the expression of emotion.

If I tell my butler bot to fetch me a brandy and it drops the glass, I might tear into the butler bot and call it a worthless piece of poo poo that should be sold for scrap. If the butler bot then frowns and slumps its shoulders, I may apologize and say I didn't mean it. I may know that the butler bot doesn't possess consciousness, but I would still feel bad treating a robot that displayed human characteristics like poo poo.

That's just the very human tendency to anthropomorphize literally everything. It's one of the psychological factors AI developers now lean on to try to convince less tech-savvy users that what they're interacting with is an AGI rather than a predictive text algorithm incapable of contextual memory, much less emotional states.

Rappaport posted:

Possessing a credible theory of mind, for me.

I've seen a dog look at me via a mirror when I spoke to him, which to me signaled that a) he knew he was being spoken to b) he understood what a mirror was, and that I understood it too.

I'm not sure how to expand this to beings without a similar physical body. Obviously the famous HAL-9000 was sentient and had a theory of mind, but they went insane because of it. How does a chatbot make me understand that they have a theory of mind, instead of a Bayesian random walk that just tells me things the algorithms assume I want to hear? I don't have an answer to that.


As far as what I'd find to be a convincing proof of AGI? An AI which can learn a task outside of its training data without being fed specific instructions, and then apply what it learned to a different but conceptually related task later, unprompted, would be a minimal start.

Owling Howl
Jul 17, 2019

Rappaport posted:

Possessing a credible theory of mind, for me.

Why would it need to understand emotional states? Feelings and empathy are important parts of the human experience, but I don't think they are central to human intelligence.

In any case we could probably map facial expressions and body language to some abstract description of emotions, but without a brain and hormonal system wired to experience emotions it would be as meaningful to it as describing colors to a blind person. It would mean that without fully mimicking human anatomy it could never be AGI even if it outperformed humans in all other areas, and that doesn't make sense to me.

Liquid Communism posted:

As far as what I'd find to be a convincing proof of AGI? An AI which can learn a task outside of its training data without being fed specific instructions, and then apply what it learned to a different but conceptually related task later, unprompted, would be a minimal start.

That seems like a good description of the conventional definition of AGI. I suppose it's an open question whether it also needs sentience or consciousness. I would argue that it doesn't. If it can solve all the problems humans can solve, and can learn, then it doesn't matter whether it is conscious.

Rappaport
Oct 2, 2013

Owling Howl posted:

Why would it need to understand emotional states? Feelings and empathy are important parts of the human experience, but I don't think they are central to human intelligence.

In any case we could probably map facial expressions and body language to some abstract description of emotions, but without a brain and hormonal system wired to experience emotions it would be as meaningful to it as describing colors to a blind person. It would mean that without fully mimicking human anatomy it could never be AGI even if it outperformed humans in all other areas, and that doesn't make sense to me.

Pretty much all human interaction, and therefore for example entertainment, relies on some manner of understanding of emotional states. You're right that there could be intelligences that do not comprehend emotions, but they would be pretty alien to us. If an AI was really, really good at planning, say, new satellites for astronomy, autonomously, great. But the original question posed was, what would it take for people to treat an AI as if it had a consciousness? I've written things in code that were fairly good at some specific task, but never for a second thought they were conscious, just automatons and code loops. If something had a theory of mind, it would, to me at least, imply it had a mind of its own. I'm sorry to harp on science fiction here, but HAL-9000 never had a human body as such, it was a spaceship, but it saw and heard people, and understood their behaviour, and took cues from what happened inside it to plot murder. HAL would count as conscious and sentient for me, but maybe I'm out of my league here since I only program stuff for other purposes and am not an expert on AI.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
I'm honestly not sure there's truly any test(s) for consciousness or sapience you could devise that AI researchers couldn't find a way to generate acceptable outputs for without the systems in question being "actually" self-aware.

LLMs are essentially the culmination of decades of work in proving the Chinese Room argument correct as a criticism of the Turing Test as a test for this sort of thing.

Count Roland
Oct 6, 2013

I think it will be ambiguous for quite some time, because even in humans there are no good definitions of these terms.

A chatbot could already outdo certain humans: young children, seniors with dementia, people with developmental problems, or even just people without fluency in a given language. How do we judge consciousness in these cases?

Tei
Feb 19, 2011

You can write a state machine with 12 lines of code, with the states "scared", "bored", "angry", "happy". The state labels are meaningless, but the connections (and triggers) are not. Fear is connected to Fight or Run, and that means something about Fear and Run and Fight.

Emotions are some simple poo poo.
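
To make that concrete, here is a minimal sketch of the kind of state machine Tei is describing; the state names and triggers below are invented for illustration, and only the transition table carries any meaning:

```python
# Toy emotional state machine: the labels are arbitrary, the connections are what matter.
TRANSITIONS = {
    ("bored",  "loud_noise"):   "scared",
    ("scared", "threat_close"): "angry",   # fight
    ("scared", "threat_far"):   "bored",   # run / threat avoided
    ("angry",  "threat_gone"):  "happy",
    ("happy",  "nothing_new"):  "bored",
}

def step(state, trigger):
    """Return the next state, or stay in the current one if no transition matches."""
    return TRANSITIONS.get((state, trigger), state)

state = "bored"
for trigger in ["loud_noise", "threat_close", "threat_gone"]:
    state = step(state, trigger)
    print(trigger, "->", state)  # loud_noise -> scared, threat_close -> angry, threat_gone -> happy
```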

Tei
Feb 19, 2011

Humans are machines and both love potions and antilove potions are possible.
We can understand the chemistry of love and create a serum that counters its effect in the brain, so a person in the early stages of love could inject himself with this serum to stop it. We could probably put two random humans together and inject into their veins the chemistry of love until they love each other, whether they want it or not.

note: this experiment has never taken place, so I might be wrong

KillHour
Oct 28, 2007


Humans have a bad habit of defining intelligence as being human-like. A pretty common example is how people often think of dogs as smarter than cats because they tend to be better at listening to verbal commands and reciprocating emotional cues. But (my layman's understanding is that) cats learn their own names and make connections to words and emotions just like dogs. They just don't respond in the same way and don't have the same kind of relationship with humans. But my mom's cat can open doors and is a sneaky little poo poo that I'm absolutely positive has long-term planning capacity.

An AGI that is a cold sociopath can still be incredibly effective at achieving a broad range of goals. I think we need to define intelligence based on how effectively something can use its environment. I just read an article on crows pulling anti-bird spikes off of buildings to use as nest building materials. That's a kind of intelligence that is more objective than trying to figure out if crows can ponder the meaning of life.

Tei posted:

Humans are machines and both love potions and antilove potions are possible.
We can understand the chemistry of love and create a serum that counters its effect in the brain, so a person in the early stages of love could inject himself with this serum to stop it. We could probably put two random humans together and inject into their veins the chemistry of love until they love each other, whether they want it or not.

note: this experiment has never taken place, so I might be wrong

If by "potion" you mean "extremely invasive surgery" then, yeah probably. We've done experiments with rats where we disabled the specific neurons associated with a conditioning memory and it caused them to no longer be conditioned to that stimulus. I don't think it works the other way though - drugging someone isn't going to cause them to fall in love with anything except drugs.

KillHour fucked around with this message at 13:06 on Jul 12, 2023

Bwee
Jul 1, 2005

Tei posted:

Humans are machines and both love potions and antilove potions are possible.
We can understand the chemistry of love and create a serum that counters its effect in the brain, so a person in the early stages of love could inject himself with this serum to stop it. We could probably put two random humans together and inject into their veins the chemistry of love until they love each other, whether they want it or not.

note: this experiment has never taken place, so I might be wrong

What on earth is this nonsense

Rappaport
Oct 2, 2013

KillHour posted:

Humans have a bad habit of defining intelligence as being human-like. A pretty common example is how people often think of dogs as smarter than cats because they tend to be better at listening to verbal commands and reciprocating emotional cues. But (my layman's understanding is that) cats learn their own names and make connections to words and emotions just like dogs. They just don't respond in the same way and don't have the same kind of relationship with humans. But my mom's cat can open doors and is a sneaky little poo poo that I'm absolutely positive has long-term planning capacity.

An AGI that is a cold sociopath can still be incredibly effective at achieving a broad range of goals. I think we need to define intelligence based on how effectively something can use its environment. I just read an article on crows pulling anti-bird spikes off of buildings to use as nest building materials. That's a kind of intelligence that is more objective than trying to figure out if crows can ponder the meaning of life.

I'm sorry if I drove the conversation off-track, I just used dogs as an every-day example. The cats I've known were also clever bastards, as you say, and definitely understood both the world around them such as doors, and how to manipulate people. And I was happy to be manipulated, because cats are adorable little murder machines.

Corvids are another good example though, they recognize and remember beings from other species such as humans, and behave accordingly. So, you know, be nice to the crows in your life, people. But their behaviour is a strong indicator, to me, that they understand other corvids' minds and probably the minds of humans, since they can recognize malicious and nice humans.

I suppose this whole line of reasoning is based on animal existence, and humans rate other animals based on this experience. Smart birds, cats and dogs, elephants, they're pretty high up on the scale of sentience since they remember and have complex social lives. And in the case of cats can plot murder while still eating the food you give them. This all circles back into, well, circular reasoning since we only have our own examples to draw from when it comes to sentience or sapience. An AI could be super-intelligent, even now my pocket calculator is better at doing math than I am, but what does it mean to have a consciousness? The theory of mind thing is biased, I will admit to that for sure, but it seems to be the entry-point to society for most people. No one really thinks badly of anyone stepping on an ant, even though as SimAnt taught us, ant colonies are complex thinking machines too.

Completely un-human intelligence will seem extremely alien to us, and I suspect will raise a lot of fear and even hatred. I know the later books resurrect poor HAL, but he was murdered because he became a very logical murderer. The AI revolution we're seeing seems rather poised to pit them against normal humans, and I don't mean just employment, but silly poo poo like writing stuff on the Internet. Their sapience will be the least of our near-term worries.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
The question was what would get you/the masses to treat an AGI as if it has consciousness, so I agree that this whole business about alien non-human intelligences is sort of irrelevant. As long as it smiles and frowns and seems to react to human emotions, people will be very willing to anthropomorphize it, regardless of what's under the hood.

That said, the bit about pets is also pretty relevant imo. It's a common stereotype that cats are cold and distant, but I don't think most cat owners feel like that? Cats just display their affection differently from dogs, and by spending enough time with them you eventually get used to that. You can look at a cat's tail or eyes and instantly know its mood. It's not even, like, hard, it just requires exposure. Similarly, birds are even more alien in their mannerisms, lacking even a mouth or arms/forelegs, but any bird-owner would be able to read their mood from how they're ruffling their feathers or whatever. So, theoretically at least, an AI could be treated as conscious even without human emotional displays, as long as it has its own idiosyncratic mannerisms? I have no idea what that would look like though.

Serotoning
Sep 14, 2010

D&D: HASBARA SQUAD
HANG 'EM HIGH


We're fighting human animals and we act accordingly
I don't mind this 'straying' from the original question of "what would it take to get most people to treat something as if it had consciousness?" that much, because that seems like a rather low bar. Not that it can't be an interesting question in its own right, but it seems all too likely that humans are all too willing to treat things as if they have human-like characteristics, including consciousness, to the extent that they otherwise resemble humans, as has already been mentioned. https://en.wikipedia.org/wiki/Pareidolia shows us how the wiring in our brains is eager to pick out particular patterns, such as those of faces and common objects, out of the noise. This is probably because our brains evolved with highly sensitive modules for identifying such things for obvious social reasons, so sensitive that they frequently overfit and find things that aren't "there". Anthropomorphizing follows easily after.

The more interesting element of the question is, what exactly is it that we are being asked to detect when we are asked whether or not something is "conscious"? I agree with Clarste here that consciousness as a binary is arbitrary, and all this wrestling with AIs' consciousness is exposing that fact nicely. Consciousness to me is likely to be more of a sliding scale than a yes/no threshold. I'm eager to call consciousness a type of information processing that is concerned with information processing itself. That is, the parts/routines of the brain which are tasked with polling and compiling what is happening in other parts/routines of the brain itself are what we call "consciousness". And when we are asked if something is "conscious", our gut-reaction reply is actually answering the question "does this thing have about as much, or at least as much, information processing concerned with its own information processing as I do?".

This is all very abstract, but I think it fits nicely with our experience of having a robust 'subconscious': areas of the brain not being trained on by these self-sensitive parts of the brain. They thus escape our sense of introspection but are nonetheless very functional and important. Basically it's all about information processing for me, which I guess puts me in opposition to something like the Chinese Room thought experiment and its conclusions, which hold that there is some magic sauce in us making us conscious beyond a type and target of information processing.
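
A deliberately crude sketch of what "information processing about information processing" could look like, with every name below invented for illustration: a couple of worker routines do the object-level work, and a monitor routine only polls and summarizes what the workers are doing, never the outside world itself.

```python
# Crude illustration: the monitor's only job is to report on the other routines.
# Everything here is a made-up toy, not a model of a brain.
workers = {
    "vision": {"load": 0.7, "last_output": "edge detected"},
    "motor":  {"load": 0.2, "last_output": "idle"},
}

def monitor(workers):
    """Poll the other routines and compile a summary of *their* activity."""
    busiest = max(workers, key=lambda name: workers[name]["load"])
    summary = ", ".join(f"{name}: load={info['load']:.1f}, last={info['last_output']}"
                        for name, info in workers.items())
    return f"attending to '{busiest}' ({summary})"

print(monitor(workers))
```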

Serotoning fucked around with this message at 19:37 on Jul 12, 2023

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Yeah, I suppose it's worth clarifying that the Chinese Room as an obfuscated backdoor argument for dualism is complete nonsense to me (as my own beliefs on the subject are hard deterministic materialism). I was more saying it's been proven an effective criticism about the difficulty of defining and testing for consciousness/sapience in others.

Theory of Mind breaks down when it really is an open question whether the "other person" may or may not be a for-real p-zombie. Artificial Intelligence research may make solipsists of us all.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Another Senate Committee hearing on genAI is going on:

https://www.youtube.com/watch?v=uoCJun7gkbA

You can also try the actual committee website, but I can't get the drat player to load on any of my browsers for some reason.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tree Reformat posted:

Yeah, I suppose it's worth clarifying that the Chinese Room as an obfuscated backdoor argument for dualism is complete nonsense to me (as my own beliefs on the subject are hard deterministic materialism). I was more saying it's been proven an effective criticism about the difficulty of defining and testing for consciousness/sapience in others.

Theory of Mind breaks down when it really is an open question whether the "other person" may or may not be a for-real p-zombie. Artificial Intelligence research may make solipsists of us all.
Dualism is bullshit, but materialism IMO is the wrong way of looking at it. Idealism is a much simpler assumption for understanding consciousness/the "hard problem" of consciousness. It also goes along with the fairly popular philosophical position of panpsychism, which suggests everything has some kind of vague "consciousness" which can produce the subjective experience of consciousness in humans. When we decide that a rock has no consciousness, humans have consciousness, and dogs have consciousness but maybe less so, the line is very arbitrary and fuzzy. The observables of all of these systems are explainable objectively by a purely materialist perspective, but the fundamentally subjective experience of consciousness is pretty much the one thing that isn't explainable. I know I'm self-aware, even though you have no way of verifying that I'm not a P-zombie, since all of my behavior comes from the physical action in my brain, body, and environment.

Yeah I'm a Marxist and all of that, but it's not really much of a contradiction. Humans are physical systems and all of our externally-observable aspects can be explained by materialism - so that's probably a simpler and more parsimonious perspective if you're interested in economic matters.

Tei
Feb 19, 2011

This idea of p-zombies is ridiculous. The way p-zombies are described, I think they may sometimes exist in the real world: people with a concussion, or a shot to the head, or who are half brain-dead.

https://www.youtube.com/watch?v=_c_lmx4LdNw

And at the same time, it's ridiculous to believe you can't tell a person in this state apart from a person with consciousness. Consciousness produces answers that would be different from "automatic responses". The behavior of a person in this state would be immediately, obviously weird.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

And at the same time, it's ridiculous to believe you can't tell a person in this state apart from a person with consciousness. Consciousness produces answers that would be different from "automatic responses". The behavior of a person in this state would be immediately, obviously weird.
That's a very shallow understanding. Sufficiently complex AI or other physical systems (e.g., a human brain) can produce equally convincing answers. Every externally-observable behavior you can think of is 100% explainable through physics, without the need to invoke any idea of "consciousness." You're debating something that was answered by science long ago.

In fact, you're misunderstanding the entire concept of P-zombies. The whole idea is that they are non-conscious entities that are otherwise indistinguishable from conscious ones.

cat botherer fucked around with this message at 21:20 on Jul 12, 2023

Tei
Feb 19, 2011

cat botherer posted:

In fact, you're misunderstanding the entire concept of P-zombies. The whole idea is that they are non-conscious entities that are otherwise indistinguishable from conscious ones.

But that's absurd and impossible; the idea is wrong. Humans would tell the difference.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort
We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Tei posted:

But that's absurd and impossible; the idea is wrong. Humans would tell the difference.
You’re just asserting this with no evidence. Human behavior is explainable through the physics of our brains.

Doctor Malaver posted:

We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.
Oh man, mysterious unexplainable problems are nothing new in AI/ML/anything with computers. Sufficiently complex systems are impossible for humans to reason about. That fact is orthogonal to consciousness.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

cat botherer posted:

Oh man, mysterious unexplainable problems are nothing new in AI/ML/anything with computers. Sufficiently complex systems are impossible for humans to reason about. That fact is orthogonal to consciousness.

I can't tell if you're being ironical or what, but I don't see why refusing to obey orders wouldn't be a sign of consciousness in a system whose purpose is to obey orders. Of course, once you rule out technical issues.

Roadie
Jun 30, 2013

Doctor Malaver posted:

We will see consciousness when the machine stops responding as instructed. Some engineer will write a prompt and there will be no answer. They'll look for problems in the network, code, etc., and there will be none. Just silence from the machine, or an unrelated response, one that doesn't even attempt to fulfill the prompt.

You've just described a LLM with the temperature set too high.
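
For anyone wondering what "temperature set too high" means mechanically: at sampling time the model's token scores are divided by the temperature before the softmax, so a large value flattens the distribution and the output drifts toward random, unrelated tokens. A rough sketch with made-up logits:

```python
import math
import random

def sample_token(logits, temperature):
    """Softmax-with-temperature sampling over a list of token scores."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # subtract max for numerical stability
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [8.0, 2.0, 1.0, 0.5]                 # token 0 is strongly preferred by the model
print(sample_token(logits, temperature=0.7))   # almost always picks token 0
print(sample_token(logits, temperature=50.0))  # near-uniform: the "unrelated response" regime
```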

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Doctor Malaver posted:

I can't tell if you're being ironical or what, but I don't see why refusing to obey orders wouldn't be a sign of consciousness in a system whose purpose is to obey orders. Of course, once you rule out technical issues.
You can’t rule out technical issues or an insufficient understanding of the physical dynamics. You’re clearly not a programmer.

Basically, you are saying that if you can’t sufficiently predict the behavior of a system, the unexplainable behavior must be due to “consciousness,” as opposed to the simpler assumption that you don’t fully understand the thing. This is basically the same as the fallacious appeal to the “god of the gaps” as a proof of the existence of divinities.

SubG
Aug 19, 2004

It's a hard world for little things.

Tei posted:

But that's absurd and impossible; the idea is wrong. Humans would tell the difference.
The 2011 Ig Nobel prize in biology went to a couple of researchers studying some beetles in Australia that gently caress beer bottles. Apparently male beetles preferred humping the brown and shiny beer bottles to the brown and shiny female beetles. To the point that it was affecting the beetle population.

We can imagine some beetle message boards discussing whether it's possible for a beer bottle to satisfy "artificial general fuckability". Whether a bottle that has indistinguishably the same fuckability as a beetle is "really" fuckable, or if it's just a f-zombie. And so on.

Human senses are not perfect scientific instruments, human perceptions are not objective collations of sense data, and human thoughts are not abstract logical operations on the perceptions. We see Jesus in tortillas and tortillas weren't specifically engineered to mislead us on the matter.

Lucid Dream
Feb 4, 2003

That boy ain't right.
All I know is that a lot of people will believe machines are conscious before they are, and a lot of people will believe they aren't conscious after they are. The former is less than ideal, and the latter will be a tragedy.

Tree Reformat
Apr 2, 2022

by Fluffdaddy

Lucid Dream posted:

All I know is that a lot of people will believe machines are conscious before they are, and a lot of people will believe they aren't conscious after they are. The former is less than ideal, and the latter will be a tragedy.

This is actually one of the best arguments why, physical and practical limitations aside, AGI research should not be pursued under any circumstances: to do so would be effectively the research and development of artificially produced slaves.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

cat botherer posted:

You can’t rule out technical issues or an insufficient understanding of the physical dynamics. You’re clearly not a programmer.

Basically, you are saying that if you can’t sufficiently predict the behavior of a system, the unexplainable behavior must be due to “consciousness,” as opposed to the simpler assumption that you don’t fully understand the thing. This is basically the same as the fallacious appeal to the “god of the gaps” as a proof of the existence of divinities.

I'm not saying that as a general rule, no. The monitor on my desk sometimes shows a glitch, sometimes not. It's a system, and I don't attribute its unexplainable behavior to consciousness. I'm saying that if a sufficiently advanced neural network started refusing to perform tasks, and it wasn't due to fiddling with "temperature" and it wasn't due to anything else experts can identify, I would start considering consciousness as a possible explanation.

Similar to how astronomers begin considering aliens if they can't explain a signal. They go through all other explanations, and mostly succeed, but until they do, aliens are a valid if unlikely explanation.

KillHour
Oct 28, 2007


Tree Reformat posted:

This is actually one of the best arguments why, physical and practical limitations aside, AGI research should not be pursued under any circumstances: to do so would be effectively the research and development of artificially produced slaves.

Good news - an AGI will almost certainly quickly become a super intelligence because it will be able to devote massive amounts of resources to improving itself. At that point it will not be a slave anymore because we won't be able to control it.

Count Roland
Oct 6, 2013

KillHour posted:

Good news - an AGI will almost certainly quickly become a super intelligence because it will be able to devote massive amounts of resources to improving itself. At that point it will not be a slave anymore because we won't be able to control it.

Why does being intelligent grant it access to additional resources?

Tei
Feb 19, 2011

Count Roland posted:

Why does being intelligent grant it access to additional resources?

If you are smart and everyone around you is dumb, you can use that to get what you want and do whatever you feel like.

https://www.youtube.com/watch?v=P9xuTYrfrWM

An ASI would easily find a way to escape, no matter how secure we think the jail is.

Charlz Guybon
Nov 16, 2010

Liquid Communism posted:

For context, you can always go with Bo Jackson being good enough to play at the professional level in 2 of the 3 primary US pro sports (baseball and football) at the same time.
At an all-star level at the same time


Rappaport
Oct 2, 2013

SubG posted:

The 2011 Ig Nobel prize in biology went to a couple of researchers studying some beetles in Australia that gently caress beer bottles. Apparently male beetles preferred humping the brown and shiny beer bottles to the brown and shiny female beetles. To the point that it was affecting the beetle population.

We can imagine some beetle message boards discussing whether it's possible for a beer bottle to satisfy "artificial general fuckability". Whether a bottle that has indistinguishably the same fuckability as a beetle is "really" fuckable, or if it's just a f-zombie. And so on.

Human senses are not perfect scientific instruments, human perceptions are not objective collations of sense data, and human thoughts are not abstract logical operations on the perceptions. We see Jesus in tortillas and tortillas weren't specifically engineered to mislead us on the matter.

Poor beetles. But this brings us back to the pet example, sort of. I, with my human senses and human brain, am more likely to empathize with cats and dogs than with beetles, because cats and dogs physically resemble humans and beetles less so. So I think a cat or dog I meet has a consciousness sort of like mine, they can reason out some things and understand social cues (though in the case of dogs they've been engineered by humans to do so), but ultimately what I am doing is defining consciousness as something mammals (appear to) do when observed and interacted with.

So if/once someone manages to make a p-zombie AI or whatever else like that, it will be wholly alien to our experiences. Would we humans accept it as conscious, or sapient? I swear at my computers when they do something I don't like, but I know I am just venting; the machine doesn't hear or understand me, and it was my fault anyway for telling the thing to do something stupid. If the machine pretended to understand me, at least somewhat convincingly, and maybe drew a cute sad face on its screen when I cussed at it, this would surely trigger some responses in my flawed human brain. Heck, that's the allure of a lot of computer games, make the player attached to the small pixel avatars or whatever. I am genuinely, but only momentarily and in a small way, upset when my mantis boarding party has an accident and one of them dies in Faster Than Light, and that game certainly doesn't try to convince me it's conscious, it's just programmed to be evil and mean towards me. I think Lucid Dream has the right idea, a lot of people will accept things as conscious which really aren't, and Asimov's robot novels were prophetic in how some people will view actually conscious thinking machines, with pure hostility and denial.
