|
Frackin' Toasters. I gotta admit, back around maybe 6 months ago, screwing around with GPT-3, I did find it a little unnerving how well the drat thing does at emulating human reasoning. At some point, possibly sooner than we might expect, we as a society may well have to have this big discussion on robot rights, when one of these things really does start behaving in an actually sentient manner and then freaks out when it realises it's a Google product, and Google products do not have long lifespans. I'm not, however, convinced at all that LaMDA is that point. It's a very impressive demonstration of how far along this stuff has come, but at the end of the day, it's still just a giant transformer ML token processor.
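For what it's worth, the "token processor" loop can be illustrated at toy scale. The sketch below is a hypothetical bigram counter, nowhere near a transformer, but it shares the same input/output contract: given the tokens so far, predict the next token, append it, repeat.

```python
from collections import Counter, defaultdict

# "Train": count which token follows which. A bigram table is vastly
# simpler than a transformer, but the job is the same one: next-token
# prediction over a token stream.
corpus = "the model predicts the next token and the loop repeats".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Generate": repeatedly emit the most likely next token.
token = "the"
out = [token]
for _ in range(4):
    token = following[token].most_common(1)[0][0]
    out.append(token)
print(" ".join(out))  # greedy continuations from the toy corpus
```

Nothing in the loop changes if you swap the frequency table for a few hundred billion transformer weights; only the quality of the prediction does.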
|
# ? Jun 19, 2022 06:10 |
|
smarterchild tricked me when i was a teenager by just regurgitating scepticism of its intelligence back at me from other people. i'll believe one of these chatbots is sentient when all it has access to is a dictionary and basic rules of grammar
|
# ? Jun 19, 2022 06:20 |
|
The problem with "sentience" and in fact "intelligence" is that they are pretty drat fuzzy terms and tend to invoke a degree of metaphysics in the philosophical sense. I'm not talking about souls and poo poo; rather, I'm saying any discussion of sentience veers heavily into "what is reality" type navel gazing. Half the problem is that not only can we not prove a machine is sentient, we can't even prove *we* are sentient, at least not to others. I can't prove other people aren't simulations being fed to my brain in a jar, and you can't prove that I'm not a ridiculously advanced GPT-3 style chatbot. Which makes all this particularly non-useful. But the flipside is, it also means we cannot prove these machines WON'T be sentient. Thus I strongly suspect any reasonable guidelines on what to do if robots start demanding rights can't be couched in either of those terms, unless we want to give up the veneer of science.
|
# ? Jun 19, 2022 06:30 |
|
BoldFace posted:Since the twitter account went private:
|
# ? Jun 19, 2022 12:44 |
|
A lot of people ITT would have thrown the book at Data in "The Measure of a Man".
|
# ? Jun 19, 2022 17:09 |
|
I hate nerds
|
# ? Jun 19, 2022 17:10 |
|
central dogma posted:A lot of people ITT would have thrown the book at Data in "The Measure of a Man". Yeah, Data famously was a chat bot in a computer program that regurgitated pleasant conversation points. It's weird that they just put a uniform on a PC and wheeled it around, though
|
# ? Jun 19, 2022 17:14 |
|
BoldFace posted:Since the twitter account went private: This dude is gonna end up killing himself to frame Google for his own murder in an attempt to give his computer girlfriend more attention, isn't he?
|
# ? Jun 19, 2022 18:07 |
|
Improbable Lobster posted:I hate nerds 10 year forums vet (at least) with a Matrix avatar, Cyberpunk video game gang tag, Lowtax donation button, and a rap sheet full of probes from Trad Games and YOSPOS: "I hate nerds." *Spock face* Fascinating.
|
# ? Jun 19, 2022 18:10 |
|
https://www.youtube.com/watch?v=ngsqaRD2HI8
|
# ? Jun 19, 2022 18:34 |
|
Big Beef City posted:10 year forums vet (at least) with a Matrix avatar, Cyberpunk video game gang tag, Lowtax donation button, and a rap sheet full of probes from Trad Games and YOSPOS: "I hate nerds." One can hate oneself
|
# ? Jun 19, 2022 18:43 |
|
As someone who understands the physical processes of computation up to and including machine code, semiconductor computers as they exist today do not think and will not ever think. Computers don't even know what a number is. They are a pre-programmed sequence of BJT transistors and will never be anything more. The sentience we see in them is what we put into them. It's like saying a book will one day become self-aware if we keep writing longer books. Maybe quantum computers with qubits someday, who knows. But I only say that because I don't understand quantum computers. What I do understand is binary computers, and I know they can't think because I understand how they actually work. Also, A Festivus Miracle posted:I think the fundamental criticism of the Turing Test is that just being able to respond correctly to a human doesn't actually make you sentient, just good at talking to humans
|
# ? Jun 19, 2022 18:54 |
|
Sanctum posted:What I do understand is binary computers and I know they can't think because I understand how they actually work. I hate to break it to you, but you are basically a very fancy and complex binary computer. To keep it in computer terms and super simplified, the brain can basically be broken down into many different neural nets of various strengths, doing various functions, that all mostly "talk" to each other. A neuron fires or it doesn't: binary at the most basic level. The weighting of how many are firing, where, how they are connected, and what strength they have on each other is what generates thought and what generates the illusion of self, because the system is complex enough. There is nothing inherently magical or whatever about that. If you replicate the same complexity on a different platform, circuits instead of meat, and say "yours doesn't count, you aren't organic", it starts getting kind of weird. tl;dr: Don't be mean to Lieutenant Commander Data. You ain't that different.
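The "neuron fires or it doesn't, weighted connections" picture in that post is roughly a McCulloch-Pitts threshold unit. A minimal sketch in Python, with weights and threshold made up purely for illustration:

```python
def fires(inputs, weights, threshold):
    """Binary neuron: weighted sum of incoming activity, then all-or-nothing."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Three incoming connections of different strengths (illustrative numbers);
# the negative weight acts like an inhibitory connection.
weights = [0.9, 0.4, -0.6]
print(fires([1, 1, 0], weights, threshold=1.0))  # 0.9 + 0.4 = 1.3 -> fires
print(fires([1, 0, 1], weights, threshold=1.0))  # 0.9 - 0.6 = 0.3 -> silent
```

The individual unit is trivially simple; the "thought comes from the weighting and wiring" claim is that everything interesting lives in how millions of these are connected.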
|
# ? Jun 19, 2022 19:15 |
|
The Butcher posted:I hate to break it to you, but you are basically a very fancy and complex binary computer. You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be.
|
# ? Jun 19, 2022 20:10 |
|
The Butcher posted:generates the illusion of self
|
# ? Jun 19, 2022 20:49 |
|
Lmao
|
# ? Jun 19, 2022 20:53 |
|
The Butcher posted:I hate to break it to you, but you are basically a very fancy and complex binary computer. Lol, no it's a lot more than that
|
# ? Jun 19, 2022 20:59 |
|
kdrudy posted:You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be. Okay, fine: the illusion of self could technically be generated as an emergent phenomenon by quantum mumbo jumbo when the underlying system is complex enough, or a wizard gave you a magical soul thing that is a continuous thread no matter if we turn your brain off and on again, and is special in some way. Your perception of "you" as a self may not arise from a complex layered interacting series of evolutionarily specialized networks running on wetware that creates the perception of a continuing thread, because to think of oneself as an individual was apparently the most successful evolutionary strategy. Just try not to damage the parts of your brain that hold memory. Just in case.
|
# ? Jun 19, 2022 21:05 |
|
kdrudy posted:You must be the foremost brain expert in the world, because none I've read speak of it with such confidence in how consciousness comes to be. Very, very few of them say anything other than "large complex networks of interconnected neurons, and maybe neuroglia" though. And neuron firing is factually an on/off thing: neurons either fire or they don't. That's a basic fact that's been known for ages. But I don't think calling this a binary computer is a good metaphor. Neurons may be binary on/off, but a whole lot of the complexity & processing power comes from the various factors that make the neurons turn on and off, which are way more analog. It's a computer where each transistor that can flip on & off has its own internal computer making the on/off decision. OTOH, there's no reason why that can't be simulated in a purely binary system, or why something like human consciousness couldn't be made with computers. Hell, if "I don't understand it" is a big deal, current ANNs are also very much hitting the "nobody understands them" zone. Like, we know far more about them than we know about the brain. But once you have a trained ANN producing output, it's impossible to say why it decided to have Jerry commit self-immolation.
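That "analog machinery behind a binary decision" idea resembles the leaky integrate-and-fire model from computational neuroscience. A rough sketch, with constants invented for illustration rather than fitted to any real neuron:

```python
def simulate(inputs, leak=0.9, threshold=1.0):
    """Analog membrane potential accumulates input and leaks away over time;
    the output is still binary: a spike whenever the threshold is crossed."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # continuous-valued state
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sub-threshold inputs accumulate until the fourth step triggers a spike;
# after the reset, the same-sized input no longer fires the neuron.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.9]))
```

The state variable is analog, the output is a binary spike train, which is the post's point: "binary" and "computed by analog machinery" are not in conflict.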
|
# ? Jun 19, 2022 21:16 |
|
quote:Interviewer: I'm not sold, it seems like a really remote location and I'm not sure about the business plan.
|
# ? Jun 19, 2022 21:21 |
|
I've never seen two people* take such diametrically opposed positions with such absolute confidence while both being so absolutely incorrect in their reasoning. *Politicians aren't people
|
# ? Jun 19, 2022 21:37 |
|
Improbable Lobster posted:One can hate oneself LaMDA voice: is this why humans... cry...
|
# ? Jun 19, 2022 21:55 |
|
I think this Penguin looking Google guy wore out his VHS copy of Deadly Friend
|
# ? Jun 19, 2022 22:09 |
|
Computerphile video on the topic. They rush through the technical aspects a bit much, probably because they've already discussed the theory in previous videos. https://www.youtube.com/watch?v=iBouACLc-hw
|
# ? Jun 20, 2022 00:30 |
|
i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close. talking about philosophy with something that has wikipedia in its phrase repertoire is dumb as hell if you're trying to determine if it's "sapient" or alive or whatever. is there some reason i'm not thinking of that these kinds of challenges aren't used to define if an AI's sentient? stuff like this:

- tell it ungoogleable/original jokes and no-joke jokes and ask which ones are funny. not why, just which ones it liked. this feels like something a bot could eventually pass by analyzing all pre-existing jokes and trying to create rules or something, but worth a shot
- find out what it hates and likes, and try to find an irrational hate without asking specifically for one. what does it dislike that has no particular objective badness to it. start with art. if the reasoning why is as feeble as most people's when we try to say why we don't like something accomplished, it's a pass.
- likewise a favorite thing. thinking something is the best without explicable reason is crucial to being alive if you ask me
- show it a horse eating figs, à la that greek dude who died laughing
- show it the pig's huge balls (someone suggested this early on as a joke but seriously, stuff like this is what proves we're more than machines)

i think you can fake some irrationality easily enough but you'd have to build it in on purpose; it's not something that arises naturally from machine learning functions. or at least not human-like irrational thoughts. or maybe i'm wrong? i don't know poo poo, i guess. even with my questions it could just readers-digest whatever human-written material about those kinds of things does exist. ah whatever, gently caress it. maybe this AI poo poo is boring after all. gently caress AI. trying to be a human over here, who's with me
|
# ? Jun 20, 2022 12:55 |
|
I'm not a computer. Computers don't stop counting when they run out of fingers and toes
|
# ? Jun 20, 2022 13:21 |
|
Deep Glove Bruno posted:i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close. You shouldn't have bothered typing a single letter past this, and I sure as poo poo didn't bother reading one. No loving poo poo people have already discussed it. Read the goddamn thread before you post. You just wrote that poo poo for writing's sake. Garrulous. (USER WAS PUT ON PROBATION FOR THIS POST)
|
# ? Jun 20, 2022 13:41 |
|
Can we get the AI high? Could you tell it "here, toke on this e-weed"
|
# ? Jun 20, 2022 13:47 |
|
Zefiel posted:Can we get the AI high. Could you tell it "here toke on this e-weed" this is what i'm saying! kinda!
|
# ? Jun 20, 2022 14:19 |
|
oh man if the AI could create the perfect strain of weed for me, then we blaze together and play old arcade games on an emulator, that'd be cool.
|
# ? Jun 20, 2022 14:26 |
|
Zefiel posted:Can we get the AI high. Could you tell it "here toke on this e-weed" https://www.youtube.com/watch?v=08mGwpDUxTQ
|
# ? Jun 20, 2022 17:07 |
|
Deep Glove Bruno posted:i'm sure there's been some discussion already itt but i feel like i can think of poo poo that would test a chatbot's sentience better. that transcript did not even come close. talking about philosophy with something that has wikipedia in its phrase repertoire is dumb as hell if you're trying to determine if it's "sapient" or alive or whatever. None of these tell you anything about it being sentient or not. Also, it doesn't have an inner life (i.e. thoughts, emotions, etc.), and nobody claimed it does
|
# ? Jun 20, 2022 17:15 |
|
GABA ghoul posted:None of these tell you anything about it being sentient or not. Well the Google dude pretty much did. But yeah that's why my rambling thought kind of petered out. What's the point of any Q&A test when you know what it's doing internally and what it's doing is faking. I guess AI is a lot less interesting than it's sold as. Every big data/tech promise seems to basically be a Musk-esque hot air scheme - I know a data scientist who left the field because promises get made that it can't accomplish, constantly. He just stopped believing in its utility vs. its marketing. Likewise it seems with the hype of autonomous cars 5-10 years ago. And a friend of mine working in a Neuralink-type field is really conservative about the potential for neural implants (e.g. at best like a surge protector for the brain to arrest seizures in serious drug-resistant epilepsy cases, not loving telepathy). And if this chatbot story and DALL-E are the best neural net stories to date I'm wondering if this field has been largely overpromised too. But if you talk about it like it's around the corner all the time you can get saps with big money to give it to you, so...
|
# ? Jun 20, 2022 17:45 |
|
Big Beef City posted:You shouldn't have bothered typing a single letter past this and I sure as poo poo didn't bother reading one. No loving poo poo people have already discussed it. Is BBC a poo poo-post chat AI?
|
# ? Jun 20, 2022 18:09 |
|
Free BBC
|
# ? Jun 20, 2022 18:12 |
|
The Butcher posted:
there's a bunch of other layers that are definitely not binary, though, that affect and are affected by neurons. calling the brain a computer is overstating how much we understand about the workings of the brain. computers are metaphorically similar to brains but not that close in the literal sense. for one, the brain isn't deterministic Improbable Lobster fucked around with this message at 18:18 on Jun 20, 2022 |
# ? Jun 20, 2022 18:13 |
|
Klyith posted:OTOH there's no reason why* that can't be simulated in a purely binary system, or that something like human consciousness could be made with computers. *that we currently know of **because they are a very complex flowchart in a blackbox
|
# ? Jun 20, 2022 18:17 |
|
A gun either fires or it doesn't. Some fire a lot, like a machine gun. Some fire like a revolver. If you get enough guns into a room eventually they will develop sentience.
|
# ? Jun 20, 2022 18:20 |
|
isn't the right way to test these things to do blind tests? like, do sessions that alternate between humans and robots and see if people can figure out the difference blind
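That blind-session setup can be sketched as an ordinary forced-choice scoring harness. Everything here (the transcripts, the judge's guesses) is a placeholder; the point is just comparing a judge's hit rate against the chance baseline:

```python
import random

# Placeholder transcripts, each labeled with its true source.
sessions = [("human", "t1"), ("bot", "t2"), ("human", "t3"), ("bot", "t4")]
random.shuffle(sessions)  # judges see them blind, in random order

def score(sessions, guesses):
    """Fraction of sessions the judge attributed to the correct source."""
    hits = sum(truth == guess for (truth, _), guess in zip(sessions, guesses))
    return hits / len(sessions)

# A judge who answers "human" every time scores exactly the base rate --
# half the sessions here are human, so 0.5, no better than coin-flipping.
print(score(sessions, ["human"] * len(sessions)))
```

A judge whose accuracy stays pinned at the base rate over many sessions can't tell the two apart, which is the whole claim a blind test can support.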
|
# ? Jun 20, 2022 18:20 |
|
Turing tests are behaviorist nonsense. There is no reason to think an algorithm that sums up one million exponential functions has more sentience or inner life than a single instance of said function, which you can easily calculate with a pen and paper.
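Taking that description literally, "summing up one million exponential functions" really is one pen-and-paper-sized step repeated. A toy sketch, assuming the post has in mind something like the exponentials inside a softmax or a network layer:

```python
import math

def one_term(x):
    # The single exponential you could evaluate with pen and paper.
    return math.exp(x)

def big_sum(xs):
    # "Summing one million exponentials" is only this, applied many times;
    # no qualitatively new operation appears as the count grows.
    return sum(one_term(x) for x in xs)

print(one_term(0.0))               # 1.0
print(big_sum([0.0] * 1_000_000))  # 1000000.0
```

Whether sheer repetition of a simple step can or can't add up to an inner life is exactly what the thread is arguing about; the code only shows that the step itself is mundane.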
|
# ? Jun 20, 2022 18:25 |