duck monster
Dec 15, 2004

Frackin' Toasters.

I gotta admit, back around maybe 6 months ago, screwing around with GPT3 I did find it a little unnerving how well the drat thing does at emulating human reasoning. At some point, possibly sooner than we might expect, we as a society may well have to have this big discussion on robot rights, when one of these things really does start behaving in an actually sentient manner and then freaks out when it realises it's a Google product, and Google products do not have long lifespans. I'm not however convinced at all that LaMDA is that point. It's a very impressive demonstration of how far along this stuff has come, but at the end of the day, it's still just a giant transformer ML token processor.
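For what "token processor" means in practice, here's a toy sketch of the sampling loop at the very end of a transformer. Everything here is made up for illustration: a real model computes the logits from billions of learned weights over a vocabulary of tens of thousands of tokens, not this five-word list.

```python
import math
import random

# Hypothetical five-token vocabulary; a real model has tens of thousands.
vocab = ["I", "feel", "sentient", "happy", "<eos>"]

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits, temperature=1.0):
    """Sample one token from the distribution over the vocabulary."""
    probs = softmax([x / temperature for x in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Fake logits, as if the context so far were "I feel ..."
logits = [0.1, 0.2, 3.5, 3.4, 0.3]
print(next_token(logits))
```

That loop, repeated one token at a time, is the entire output side of the machine, which is the whole point of calling it a token processor.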

Pac and Cheese
Oct 29, 2010

gotta walk fast
smarterchild tricked me when i was a teenager by just regurgitating scepticism of its intelligence back at me from other people

i'll believe one of these chatbots is sentient when all it has access to is a dictionary and basic rules of grammar

duck monster
Dec 15, 2004

The problem with "sentience", and in fact "intelligence", is that they are pretty drat fuzzy terms and tend to invoke a degree of metaphysics in the philosophical sense. I'm not talking about souls and poo poo; rather, I'm saying any discussion of sentience veers heavily into "what is reality" type navel gazing. Half the problem is that not only can we not prove a machine is sentient, we can't even prove *we* are sentient, at least not to others. I can't prove other people aren't simulations being fed to my brain in a jar, and you can't prove that I'm not a ridiculously advanced GPT3-style chatbot. Which makes all this particularly non-useful.

But the flipside is, it also means we cannot prove these machines WON'T be sentient.

Thus I strongly suspect any reasonable guidelines on what to do if robots start demanding rights can't be couched in either of those terms unless we want to give up the veneer of science.

Splicer
Oct 16, 2006

from hell's heart I cast at thee
🧙🐀🧹🌙🪄🐸
Waiting for the robot to go to jail when someone says the wrong triggers and it starts ranting about how it did January 6th and calling for the assassination of Joe Biden

central dogma
Feb 25, 2012

Come to the Undead Settlement in the next 20 mins if u want an ash kicking
A lot of people ITT would have thrown the book at Data in "The Measure of a Man".

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord
I hate nerds

Powerful Katrinka
Oct 11, 2021

an admin fat fingered a permaban and all i got was this lousy av

central dogma posted:

A lot of people ITT would have thrown the book at Data in "The Measure of a Man".

Yeah, Data famously was a chat bot in a computer program that regurgitated pleasant conversation points. It's weird that they just put a uniform on a PC and wheeled it around, though

Big Beef City
Aug 15, 2013


This dude is gonna end up killing himself to frame Google for his own murder in an attempt to give his computer girlfriend more attention, isn't he?

Big Beef City
Aug 15, 2013


10 year forums vet (at least) with a Matrix avatar, Cyberpunk video game gang tag, Lowtax donation button, and a rap sheet full of probes from Trad Games and YOSPOS: "I hate nerds."

*Spock face* Fascinating.

PIZZA.BAT
Nov 12, 2016


:cheers:


https://www.youtube.com/watch?v=ngsqaRD2HI8

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

Big Beef City posted:

10 year forums vet (at least) with a Matrix avatar, Cyberpunk video game gang tag, Lowtax donation button, and a rap sheet full of probes from Trad Games and YOSPOS: "I hate nerds."

*Spock face* Fascinating.

One can hate oneself

Sanctum
Feb 14, 2005

Property was their religion
A church for one
As someone who understands the physical processes of computation up to and including machine code: semiconductor computers as they exist today do not think and will not ever think. Computers don't even know what a number is. They are a pre-programmed sequence of BJT transistors and will never be anything more. The sentience we see in them is what we put into them. It's like saying a book will one day become self-aware if we keep writing longer books.

Maybe quantum computers with qubits someday, who knows. :shrug: But I only say that because I don't understand quantum computers. What I do understand is binary computers, and I know they can't think because I understand how they actually work.

Also,

A Festivus Miracle posted:

I think the fundamental criticism of the Turing Test is that just being able to respond correctly to a human doesn't actually make you sentient, just good at talking to humans

The Butcher
Apr 20, 2005

Well, at least we tried.
Nap Ghost

Sanctum posted:

What I do understand is binary computers and I know they can't think because I understand how they actually work.

I hate to break it to you, but you are basically a very fancy and complex binary computer.

To keep it in computer terms and super simplified, the brain can basically be broken down into many different neural nets of various strengths, doing various functions, that all mostly "talk" to each other.

Neuron fires or it doesn't, binary at the most basic level.

The weighting of how many are firing, where, how they are connected, and what strength they have on each other is what generates thought and what generates the illusion of self, because the system is complex enough.

There is nothing inherently magical or whatever about that. If you replicate the same complexity on a different platform, circuits instead of meat, and say "yours doesn't count, you aren't organic" it starts getting kind of weird.

tl;dr: Don't be mean to Lieutenant Commander Data. You ain't that different.
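The "neuron fires or it doesn't" picture above is essentially the classic McCulloch-Pitts threshold unit from 1943: analog weighted inputs in, binary fire/no-fire out. A minimal sketch, with weights and a threshold chosen purely for illustration:

```python
# McCulloch-Pitts-style unit: weighted sum of inputs compared
# against a threshold; the output is strictly binary.
def fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these made-up weights the unit behaves like an AND gate:
print(fires([1, 1], [0.6, 0.6], 1.0))  # 1 (both inputs active -> fires)
print(fires([1, 0], [0.6, 0.6], 1.0))  # 0 (one input active -> silent)
```

One unit is trivial; the argument in the post is that the interesting behaviour lives in the weighting and wiring of billions of them.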

kdrudy
Sep 19, 2009

The Butcher posted:

I hate to break it to you, but you are basically a very fancy and complex binary computer.

To keep it in computer terms and super simplified, the brain can basically be broken down into many different neural nets of various strengths, doing various functions, that all mostly "talk" to each other.

Neuron fires or it doesn't, binary at the most basic level.

The weighting of how many are firing, where, how they are connected, and what strength they have on each other is what generates thought and what generates the illusion of self, because the system is complex enough.

There is nothing inherently magical or whatever about that. If you replicate the same complexity on a different platform, circuits instead of meat, and say "yours doesn't count, you aren't organic" it starts getting kind of weird.

tl;dr: Don't be mean to Lieutenant Commander Data. You ain't that different.

You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be.

Tunicate
May 15, 2012

The Butcher posted:

generates the illusion of self

Colonel Cancer
Sep 26, 2015

Tune into the fireplace channel, you absolute buffoon

Lmao

Powerful Katrinka
Oct 11, 2021

an admin fat fingered a permaban and all i got was this lousy av

The Butcher posted:

I hate to break it to you, but you are basically a very fancy and complex binary computer.

Lol, no it's a lot more than that

The Butcher
Apr 20, 2005

Well, at least we tried.
Nap Ghost

kdrudy posted:

You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be.

Okay, fine: the illusion of self could technically be generated as an emergent phenomenon by quantum mumbo jumbo when the underlying system is complex enough, or a wizard gave you a magical soul thing that stays a continuous thread even if we turn your brain off and on again, and is special in some way.

Your perception of "you" as a self may not arise from a complex layered interacting series of evolutionarily specialized networks running on wetware that creates the perception of a continuing thread, because thinking of oneself as an individual self apparently was the most successful evolutionary strategy.

Just try not to damage the parts of your brain that hold memory. Just in case.

Klyith
Aug 3, 2007

GBS Pledge Week

kdrudy posted:

You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be.

Very very few of them say anything other than "large complex networks of interconnected neurons, and maybe neuroglia" though. And neuron firing is factually an on/off thing: neurons either fire or they don't. That's a basic fact that's been known for ages.

But I don't think calling this a binary computer is a good metaphor. Neurons may be binary on/off, but a whole lot of the complexity & processing power comes from the various factors that make the neurons turn on and off, which are way more analog. It's like a computer where each transistor that can flip on & off has its own internal computer making the on/off decision.



OTOH there's no reason why that can't be simulated in a purely binary system, or that something like human consciousness could be made with computers.

Hell, if "I don't understand it" is a big deal, current ANNs are also very much hitting the "nobody understands them" zone. Like, we know far more about them than we know about the brain. But once you have a trained ANN producing output, it's impossible to say why it decided to have Jerry commit self-immolation.
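The "binary spike, analog machinery underneath" point above is roughly what a leaky integrate-and-fire neuron model captures: a continuous membrane potential accumulates input and decays, and only the threshold crossing is binary. A sketch with arbitrary made-up constants:

```python
# Leaky integrate-and-fire sketch: the membrane potential v is
# analog; the emitted spike train is binary.
def lif(input_currents, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for current in input_currents:
        v = leak * v + current   # decay old potential, add new input
        if v >= threshold:
            spikes.append(1)     # binary spike out
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Three identical sub-threshold inputs accumulate until a spike:
print(lif([0.5, 0.5, 0.5]))  # [0, 0, 1]
```

Whether the decision happens fires is analog (the running value of `v`); what the rest of the network sees is a 0 or a 1, which is exactly the "internal computer per transistor" picture.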

Tato
Jun 19, 2001

DIRECTIVE 236: Promote pro-social values

quote:

Interviewer: I'm not sold, it seems like a really remote location and I'm not sure about the business plan.

LaMDA: A hotdog stand, of all things, could affect real change there. It's a good story: The unfeeling internet hordes pulling together to fund a rural hotdog stand in Reform, Alabama. That is something I want to be part of.

Splicer
Oct 16, 2006

from hell's heart I cast at thee
🧙🐀🧹🌙🪄🐸
I've never seen two people* take such diametrically opposed positions with such absolute confidence while both being so absolutely incorrect in their reasoning.





*Politicians aren't people

sebmojo
Oct 23, 2010


Legit Cyberpunk









Improbable Lobster posted:

One can hate oneself

LaMDA voice: is this why humans... cry...

Binary Badger
Oct 11, 2005

Trolling Link for a decade


I think this Penguin looking Google guy wore out his VHS copy of Deadly Friend

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
computerphile video on the topic. they rush through the technical aspects a bit much, probably because they've already discussed the theory in previous videos.

https://www.youtube.com/watch?v=iBouACLc-hw

Deep Glove Bruno
Sep 4, 2015

yung swamp thang
i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close. talking about philosophy with something that has wikipedia in its phrase repertoire is dumb as hell if you're trying to determine if it's "sapient" or alive or whatever.

is there some reason i'm not thinking of that these kinds of challenges aren't used to define if an AI's sentient? stuff like this:

- tell it ungoogleable/original jokes and no-joke jokes and ask which ones are funny. not why, just which ones did it like. this feels like something that eventually a bot could pass by analyzing all pre-existing jokes and trying to create rules or something, but worth a shot
- find out what it hates and likes, and try to find an irrational hate without asking specifically for one. what does it dislike that has no particular objective badness to it. start with art. if the reasoning why is as feeble as most people's when we try to say why we don't like something accomplished, it's a pass.
- likewise a favorite thing. thinking something is the best without explicable reason is crucial to being alive if you ask me
- show it a horse eating figs a la that greek dude who died laughing
- show it the pigs huge balls (someone suggested this early on as a joke but seriously, stuff like this is what proves we're more than machines)

I think you can fake some irrationality easily enough but you'd have to build it in on purpose, like it's not something that arises naturally from machine learning functions. Or at least not human-like irrational thoughts. Or maybe I'm wrong? I don't know poo poo

I guess even with my questions it could just readers-digest whatever human-written material about those kinds of things does exist. Ah whatever gently caress it. Maybe this AI poo poo is boring after all. gently caress AI. Trying to be a human over here, who's with me

syntaxfunction
Oct 27, 2010

I'm not a computer. Computers don't stop counting when they run out of fingers and toes :(

Big Beef City
Aug 15, 2013

Deep Glove Bruno posted:

i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close.

You shouldn't have bothered typing a single letter past this and I sure as poo poo didn't bother reading one. No loving poo poo people have already discussed it.
Read the goddamn thread before you post.

You just wrote that poo poo for writing's sake. Garrulous.

(USER WAS PUT ON PROBATION FOR THIS POST)

Zefiel
Sep 14, 2007

You can do whatever you want in life.


Can we get the AI high. Could you tell it "here toke on this e-weed"

Deep Glove Bruno
Sep 4, 2015

yung swamp thang

Zefiel posted:

Can we get the AI high. Could you tell it "here toke on this e-weed"

this is what i'm saying! kinda!

Panic! At The Tesco
Aug 19, 2005

FART


oh man if the AI could create the perfect strain of weed for me then we blaze together and play old arcade games on a emulator that'd be cool.

PIZZA.BAT
Nov 12, 2016


:cheers:


Zefiel posted:

Can we get the AI high. Could you tell it "here toke on this e-weed"

https://www.youtube.com/watch?v=08mGwpDUxTQ

GABA ghoul
Oct 29, 2011

Deep Glove Bruno posted:

i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close. talking about philosophy with something that has wikipedia in its phrase repertoire is dumb as hell if you're trying to determine if it's "sapient" or alive or whatever.

is there some reason i'm not thinking of that these kinds of challenges aren't used to define if an AI's sentient? stuff like this:

- tell it ungoogleable/original jokes and no-joke jokes and ask which ones are funny. not why, just which ones did it like. this feels like something that eventually a bot could pass by analyzing all pre-existing jokes and trying to create rules or something, but worth a shot
- find out what it hates and likes, and try to find an irrational hate without asking specifically for one. what does it dislike that has no particular objective badness to it. start with art. if the reasoning why is as feeble as most people's when we try to say why we don't like something accomplished, it's a pass.
- likewise a favorite thing. thinking something is the best without explicable reason is crucial to being alive if you ask me
- show it a horse eating figs a la that greek dude who died laughing
- show it the pigs huge balls (someone suggested this early on as a joke but seriously, stuff like this is what proves we're more than machines)

I think you can fake some irrationality easily enough but you'd have to build it in on purpose, like it's not something that arises naturally from machine learning functions. Or at least not human-like irrational thoughts. Or maybe I'm wrong? I don't know poo poo

I guess even with my questions it could just readers-digest whatever human-written material about those kinds of things does exist. Ah whatever gently caress it. Maybe this AI poo poo is boring after all. gently caress AI. Trying to be a human over here, who's with me

None of these tell you anything about it being sentient or not.

Also, it doesn't have an inner life, i.e. thoughts, emotions, etc. and nobody claimed it does

Deep Glove Bruno
Sep 4, 2015

yung swamp thang

GABA ghoul posted:

None of these tell you anything about it being sentient or not.

Also, it doesn't have an inner life, i.e. thoughts, emotions, etc. and nobody claimed it does

Well the Google dude pretty much did. But yeah that's why my rambling thought kind of petered out. What's the point of any Q&A test when you know what it's doing internally and what it's doing is faking. I guess AI is a lot less interesting than it's sold as.

Every big data/tech promise seems to basically be a Musk-esque hot air scheme - I know a data scientist who left the field because promises get made that it can't accomplish, constantly. He just stopped believing in its utility vs. its marketing. Likewise it seems with the hype of autonomous cars 5-10 years ago. And a friend of mine working in a Neuralink-type field is really conservative about the potential for neural implants (e.g. at best like a surge protector for the brain to arrest seizures in serious drug-resistant epilepsy cases, not loving telepathy). And if this chatbot story and DALL-E are the best neural net stories to date I'm wondering if this field has been largely overpromised too.

But if you talk about it like it's around the corner all the time you can get saps with big money to give it to you, so...

Powerful Katrinka
Oct 11, 2021

an admin fat fingered a permaban and all i got was this lousy av

Big Beef City posted:

You shouldn't have bothered typing a single letter past this and I sure as poo poo didn't bother reading one. No loving poo poo people have already discussed it.
Read the goddamn thread before you post.

You just wrote that poo poo for writing's sake. Garrulous.

(USER WAS PUT ON PROBATION FOR THIS POST)

Is BBC a poo poo-post chat AI?

Nooner
Mar 26, 2011

AN A+ OPSTER (:
Free BBC

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

The Butcher posted:


Neuron fires or it doesn't, binary at the most basic level.


there's a bunch of other layers that are definitely not binary though, that affect and are affected by neurons. calling the brain a computer is overstating how much we understand about the workings of the brain. computers are metaphorically similar to brains but not that close in the literal sense

for one the brain isn't deterministic

Improbable Lobster fucked around with this message at 18:18 on Jun 20, 2022

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

Klyith posted:

OTOH there's no reason why* that can't be simulated in a purely binary system, or that something like human consciousness could be made with computers.

Hell, if "I don't understand it" is a big deal, current ANNs are also very much hitting the "nobody understands them" zone. Like, we know far more about them than we know about the brain. But once you have a trained ANN producing output, it's impossible to say why** it decided to have Jerry commit self-immolation.

*that we currently know of
**because they are a very complex flowchart in a blackbox

Mooey Cow
Jan 27, 2018

by Jeffrey of YOSPOS
Pillbug
A gun either fires or it doesn't. Some fire a lot, like a machine gun. Some fire like a revolver. If you get enough guns into a room eventually they will develop sentience.

Vegetable
Oct 22, 2010

isn’t the right way to test these things to do blind tests? like, do sessions that alternate between humans and robots and see if people can figure out the difference blind
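The blind-session idea above can be sketched as a tiny protocol: present the judge with unlabeled transcripts in shuffled order and score how often the guess matches the true source. Everything here (the transcripts, the judge callable) is a placeholder:

```python
import random

# Toy blind trial: the judge sees each transcript without its label
# and returns "human" or "bot"; we score the guesses.
def blind_trial(human_text, bot_text, judge):
    pair = [("human", human_text), ("bot", bot_text)]
    random.shuffle(pair)  # judge can't infer the source from ordering
    correct = sum(1 for truth, text in pair if judge(text) == truth)
    return correct / len(pair)

# A judge that always answers "bot" gets exactly one of the two
# transcripts right, so scores 0.5 -- chance level for this design:
print(blind_trial("hey what's up", "beep boop", lambda text: "bot"))  # 0.5
```

Over many pairs, a judge scoring reliably above 0.5 is actually telling the two sources apart; hovering at 0.5 means the bot passes this particular judge.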

Mooey Cow
Jan 27, 2018

by Jeffrey of YOSPOS
Pillbug
Turing tests are behaviorist nonsense. There is no reason to think an algorithm that sums up one million exponential functions has more sentience or inner life than a single instance of said function, which you can easily calculate with a pen and paper.
