Rappaport
Oct 2, 2013

I think there's also a question of autonomy buried in there with the idea of creativity. Say someone lived through World War 2, then lived under the Cold War, and used those experiences to write a book about a Manhattan Project-style mission to decode a transmission the Americans had received from outer space, and used that book to discuss with the reader many themes relating to the stupidity of the Cold War, war in general, humanity, randomness. They are drawing on their own experiences and "training data", sure, but I would argue that the creative part here is writing a sci-fi novel that is also about a lot more than what it says on the tin. The author chose what to include, presumably because that is what they wanted to say, to have a conversation with their readers about those specific topics.

Spoiler alert for the first pages of the book, but the author also chose to make his Manhattan Project-alike fail. They chose to make a story about futility and frustration. You can say it was their "training data" from understanding how science works in general, but the author's thematic framings were deliberate, and the author meant them to evoke something.

You can say to an AI "write me a story about Lovecraftian horrors that is also a detective story involving computers" and get a Laundry Files facsimile, but the AI doesn't make choices (from what I understand) the same way a human author does; it just places words together in patterns it has learned to be common in human literature. The AI doesn't choose its themes or its references the way a human does.
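
For the curious, here is roughly what I mean by "placing words together in patterns", in miniature. This is a toy bigram model in Python, emphatically not how ChatGPT actually works (real models learn vastly richer statistics with neural networks), but the "predict the next word from past text" flavour is the same:

```python
# Toy bigram model: record which word follows which in a tiny "corpus",
# then generate text by repeatedly sampling an observed continuation.
# Illustrative only -- real LLMs use learned neural representations.
import random
from collections import defaultdict

corpus = "the old house was dark and the old man was tired".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # the "training data"

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # dead end: no observed continuation
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the old man was dark and the old house"
```

No choices, no themes, no conversation with the reader; just statistics over what came before.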

Rappaport
Oct 2, 2013

BrainDance posted:

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether?

Phil Dick used a lot of drugs, but that's sort of what he did, yes.

BrainDance posted:

This gets into what does it mean for a human to make a choice, where do choices come from? I got my ideas but I'm not going to speak as confidently on that. But, choices come from somewhere too, they're also not magic.

I don't think they are magic, either. But they are informed by things like emotions, and an understanding of what the reader might glean from their reading, you know, having had "training data" on conversations and human beings. As I've been led to believe, the AIs of today don't really have that.

Rappaport
Oct 2, 2013

KillHour posted:

ChatGPT kind of does, a little? But I wouldn't have very high expectations of a human baby being raised by the collectivity of Reddit either.

It's an absurd and silly example, but there was that AI chatbot (Tay) that Microsoft let loose on the Internet, and it took less than 24 hours for it to become a full-fledged Nazi. Obviously that robot had even less sense of what being human was about and was just regurgitating awful Twitter poo poo, but, you know, it was sort of funny in a sad way.

Rappaport
Oct 2, 2013

Possessing a credible theory of mind, for me.

I've seen a dog look at me via a mirror when I spoke to him, which to me signaled that a) he knew he was being spoken to, and b) he understood what a mirror was, and that I understood it too.

I'm not sure how to extend this to beings without a similar physical body. Obviously the famous HAL 9000 was sentient and had a theory of mind, but it went insane because of it. How does a chatbot make me understand that it has a theory of mind, rather than being a Bayesian random walk that just tells me whatever the algorithm assumes I want to hear? I don't have an answer to that.

Rappaport
Oct 2, 2013

Owling Howl posted:

Why would it need to understand emotional states? Feelings and empathy are important parts of the human experience, but I don't think they are central to human intelligence.

In any case, we could probably map facial expressions and body language to some abstract description of emotions, but without a brain and hormonal system wired to experience emotions, it would be as meaningful to it as describing colors to a blind person. It would mean that without fully mimicking human anatomy it could never be AGI, even if it outperformed humans in all other areas, and that doesn't make sense to me.

Pretty much all human interaction, and therefore for example entertainment, relies on some manner of understanding of emotional states. You're right that there could be intelligences that do not comprehend emotions, but they would be pretty alien to us. If an AI was really, really good at autonomously planning, say, new satellites for astronomy, great. But the original question posed was: what would it take for people to treat an AI as if it had a consciousness?

I've written code that was fairly good at some specific task, but never for a second thought it was conscious; it was just automatons and code loops. If something had a theory of mind, that would, to me at least, imply it had a mind of its own. I'm sorry to harp on science fiction here, but HAL 9000 never had a human body as such, it was a spaceship, but it saw and heard people, understood their behaviour, and took cues from what happened inside it to plot murder. HAL would count as conscious and sentient for me, but maybe I'm out of my league here, since I only program stuff for other purposes and am not an expert on AI.

Rappaport
Oct 2, 2013

KillHour posted:

Humans have a bad habit of defining intelligence as being human-like. A pretty common example is how people often think of dogs as smarter than cats because they tend to be better at listening to verbal commands and reciprocate emotional cues more effectively. But (my layman's understanding is that) cats learn their own names and make connections to words and emotions just like dogs. They just don't respond in the same way and don't have the same kind of relationship with humans. But my mom's cat can open doors and is a sneaky little poo poo that I'm absolutely positive has long term planning capacity.

An AGI that is a cold sociopath can still be incredibly effective at achieving a broad range of goals. I think we need to define intelligence based on how effectively something can use its environment. I just read an article on crows pulling anti-bird spikes off of buildings to use as nest building materials. That's a kind of intelligence that is more objective than trying to figure out if crows can ponder the meaning of life.

I'm sorry if I drove the conversation off-track, I just used dogs as an everyday example. The cats I've known were also clever bastards, as you say, and definitely understood both the world around them, such as doors, and how to manipulate people. And I was happy to be manipulated, because cats are adorable little murder machines.

Corvids are another good example, though: they recognize and remember individuals of other species, such as humans, and behave accordingly. So, you know, be nice to the crows in your life, people. Their behaviour is a strong indicator, to me, that they understand other corvids' minds, and probably the minds of humans too, since they can tell malicious humans from nice ones.

I suppose this whole line of reasoning is grounded in animal existence, and humans rate other animals based on that experience. Smart birds, cats and dogs, elephants: they're pretty high up on the scale of sentience, since they remember and have complex social lives. And, in the case of cats, can plot murder while still eating the food you give them. This all circles back into, well, circular reasoning, since we only have our own examples to draw from when it comes to sentience or sapience.

An AI could be super-intelligent, even now my pocket calculator is better at doing math than I am, but what does it mean to have a consciousness? The theory of mind thing is biased, I will admit that for sure, but it seems to be the entry point to society for most people. No one really thinks badly of anyone stepping on an ant, even though, as SimAnt taught us, ant colonies are complex thinking machines too.

Completely un-human intelligence will seem extremely alien to us, and I suspect it will raise a lot of fear and even hatred. I know the later books resurrect poor HAL, but he was murdered because he became a very logical murderer. The AI revolution we're seeing seems poised to pit AIs against normal humans, and I don't mean just in employment, but in silly poo poo like writing stuff on the Internet. Their sapience will be the least of our near-term worries.

Rappaport
Oct 2, 2013

SubG posted:

The 2011 Ig Nobel prize in biology went to a couple of researchers studying some beetles in Australia that gently caress beer bottles. Apparently male beetles preferred humping the brown and shiny beer bottles to the brown and shiny female beetles. To the point that it was affecting the beetle population.

We can imagine some beetle message boards discussing whether it's possible for a beer bottle to satisfy "artificial general fuckability". Whether a bottle that has indistinguishably the same fuckability as a beetle is "really" fuckable, or if it's just a f-zombie. And so on.

Human senses are not perfect scientific instruments, human perceptions are not objective collations of sense data, and human thoughts are not abstract logical operations on the perceptions. We see Jesus in tortillas and tortillas weren't specifically engineered to mislead us on the matter.

Poor beetles. But this brings us back to the pet example, sort of. I, with my human senses and human brain, am more likely to empathize with cats and dogs than with beetles, because cats and dogs physically resemble humans and beetles less so. So I think a cat or dog I meet has a consciousness sort of like mine, they can reason out some things and understand social cues (though dogs have been engineered by humans to do so), but ultimately what I am doing is defining consciousness as something mammals (appear to) do when observed and interacted with.

So if, or once, someone manages to make a p-zombie AI or whatever else like that, it will be wholly alien to our experiences. Would we humans accept it as conscious, or sapient?

I swear at my computers when they do something I don't like, but I know I am just venting; the machine doesn't hear or understand me, and it was my fault anyway for telling the thing to do something stupid. If the machine pretended to understand me, at least somewhat convincingly, and maybe drew a cute sad face on its screen when I cussed at it, that would surely trigger some responses in my flawed human brain. Heck, that's the allure of a lot of computer games: make the player attached to the small pixel avatars or whatever. I am genuinely, if only momentarily and in a small way, upset when my mantis boarding party has an accident and one of them dies in Faster Than Light, and that game certainly doesn't try to convince me it's conscious; it's just programmed to be evil and mean towards me.

I think Lucid Dream has the right idea: a lot of people will accept as conscious things which really aren't, and Asimov's robot novels were prophetic in how some people will view actually conscious thinking machines, with pure hostility and denial.

Rappaport
Oct 2, 2013

SubG posted:

I think this is still a categorical error. I don't think a beetle wanting to gently caress a beer bottle tells us anything about the fundamental properties of beer bottles. The properties of the beer bottle aren't unrelated to the beetle's behaviour, but I think the beetle's behaviour tells us more about the nature of a beetle's ideas about fuckability than about the nature of beer bottles.

Similarly, I don't think it's clear that "intelligence" and "consciousness" are inherent properties of objects or systems, as opposed to being a statement about how humans socially interact with things. More like the concept of "funny" or "handsome" than "weighs about 70 kilograms" or "is about a meter and a half tall".

I would agree that there is no Donald Duck magic device that tells us how intelligent or consciousness-having something is. My computer can do integrals better than I can, provided the proper software, and I can only do integrals because I was provided with a hefty education, so which of us is more intelligent? The computer is more efficient, certainly. But my computer doesn't possess a consciousness.
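
Concretely, this is the kind of thing I mean, a quick sketch using Python's SymPy library (just one example of the "proper software"):

```python
# The "proper software" doing an integral I'd need scratch paper for:
# SymPy evaluates the Gaussian integral symbolically, instantly.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)
```

It does this faster and more reliably than I ever could, and yet I don't think it ponders what a Gaussian *is*.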

I think the question that started this line of discussion was: what would make people, even educated ones, believe an AI had a consciousness? My hypothesis is that it would require human-like characteristics, because human brains themselves are wired to think of certain things as more "like" us and others less so. Of course we can point at counter-examples such as chattel slavery based on racism, but on the whole I would still contend that an AI wholly alien to our way of thinking would seem "less" conscious than something that behaved in a manner we humans expect from each other, or from mammals, and so forth. Of course I am speculating based on my own, limited way of thinking, but so is everyone else. Unless there's some polling info out there, of course!

Rappaport
Oct 2, 2013

SubG posted:

If the question is "what would make ('educated') people believe an AI is conscious (regardless of whether this is true or not)", then it's really just a question about why people believe things in general. What makes people believe Eskimos have an enormous number of words for snow? Why do people believe microwaves cook food "from the inside out"? Why do people believe in a God or gods? I don't think the answers to any of these questions tells us anything about the properties of the things believed in, except in what they tell us about the believer and the believer's environment.

Yes, this is true. If Albert Einstein tells Marilyn Monroe that the Moon is made of cheese, or of sand, it's just a meaningless exchange of words and does not change the nature of the Moon. But the point is, it was physically possible, unless we entertain the conspiracy theories, to land on the Moon and find out what the regolith was actually composed of. We can't do that with intelligence or consciousness; there is no fluorescence test for how consciousness-having something is. That is my point, and I think it is yours too, in a round-about way. If consciousness is a property inherent to systems, the way chemical composition is inherent to lunar regolith, then we still have no means of testing for it. It's all just semantics and word-play, which ultimately amount to systems of belief.

Unless you are proposing some method of testing for consciousness that isn't the Turing test, in which case I'm sure this thread is all ears.

Rappaport
Oct 2, 2013

Shifting this to intelligence was maybe my fault, but it does seem like more of an inherent property. I don't mean IQ tests or racist poo poo like that, just that human intelligence, while not directly measurable per se, does seem to be a partly physical thing. Obviously the amount of education a person receives, how many books they read for leisure, etc., have an effect on their perceived intelligence, but it is fairly plain to see that it is easier to teach some things to some people than to others. The ability to absorb and utilize information differs between human individuals. And to speak to its physicality, we know that, for example, malnourished children have a harder time learning things, because their body is less ready to absorb information. Just because there isn't an easy Star Trek mind-scan to show that this person is more intelligent than that person doesn't mean that intelligence isn't tied to physical reality. Hell, the lobotomy procedure is based on the idea of intelligence being tied to physical reality, as gruesome as that is.

Of course we can just say that intelligence as a concept is mindless or meaningless, and is just a social convention that can be discarded. This doesn't bring us any closer to answering the question about how people would perceive consciousness, though.

Rappaport
Oct 2, 2013

We seem to be going around in circles. There is that lovely Rotten Library article about chess, but in general: I've already posited that my pocket calculators are more efficient at doing mathematics than I am. I can at least work out simple problems in my head, but still, clearly the silicon buddies are more intelligent and efficient than I am at that task. Yet my pocket calculator would be mystified by an episode of Twin Peaks, although to be fair, so am I, just in a different way. And I do not think a pocket calculator has a personality, or a consciousness, despite it being able to multiply things faster and better than I can in my head. I would posit that I have a personality and a consciousness, but maybe that is vanity on my part.

I realize it was an ooh-aah moment when a computer beat Kasparov at chess, but for some reason it's not such an ooh-aah moment when a bot shoots my fictional head off in some version of Unreal, or whatever multi-player game you choose. But surely no one ascribes agency, or consciousness, to multi-player bots, aside from cussing at them.

If we are simply hovering around the fact that there isn't a Geiger counter for intelligence, or for consciousness, then we are in agreement. These are things that emerge out of human behaviour, and I suspect any "reasonable" definition for them must remain in that category. I do not believe there are quanta of intelligence, or of consciousness, that we could detect in particle accelerators, distill, and sell as part of Gwyneth Paltrow's wellness waters.

But all the same, even if it all revolves around human behaviour, we can recognize that there are differences in intelligence, so why can't we say the same about consciousness? I suppose one could try teaching long division to a cat, but I suspect the cat wouldn't like the experience and wouldn't be much better at it than a school child. What does this tell us about intelligence? Not much, if we think intelligence and long division are 1:1, but as human-centric as it is, cats don't seem quite as intelligent as humans, in the aggregate. And a good thing too; they'd just eat us if the tables were turned.

But my pocket calculator wouldn't. It doesn't have agency; a cat demonstrably does (try giving one a bath). So, even if it is all about behaviour, I still maintain that consciousness, as human beings recognize it, has to do with things we perceive as familiar. And the fact that consciousness and intelligence are not measurable the way something's chemical composition is, which I agree they are not, doesn't make them useless in terms of this conversation, which is about what sort of creatures we as human beings would recognize as similar to ourselves.

Rappaport
Oct 2, 2013

Are the SA forums not an informal context? But that's a dodge.

I have repeatedly stated that there isn't a device that can sort out who is more intelligent than whom, and that intelligence is therefore something of a social construct. But at the same time, it also has a physical component: were I to place a screwdriver behind my eye and poke around a little, I would most likely lose some of my abilities in mathematics, languages, and so on. Similarly, cats are not building an Apollo program any time soon, though I suppose one could blame their lack of thumbs for that.

I genuinely don't comprehend what the purpose of this line of reasoning even is. We, humans collectively, can appreciate that some people are more intelligent than others, though scant few of us are more intelligent than a pocket calculator at its one specific task. The entire premise of a lot of popular fiction, like Sherlock Holmes or his modern iteration Doctor House, is that some people are very much more intelligent than others. Quick on their feet, or whichever English turn of phrase you'd prefer.

I also actively rejected rolling dice for intelligence "scores" among human beings, and by extension among artificial (is that pejorative?) beings as well. There is no such varmint, as the Donald Duck comic says.

But all of this hemming and hawing simply tells us that humans are social beings, and we attach some level of social approach to our understanding of consciousness. A cat is smarter, or more intelligent, if you like, than I am about some things, such as murdering small rodents, and I am better, or more intelligent, than the cat at working out derivatives and integrals. That is not a value judgment on the cat, or on me; it's a statement of observable fact. Similarly, in education we find that some students are more or less receptive to this or that subject, and that is not an indictment of them; some are better at linguistic tasks and others at mathematically oriented ones. Intelligence is fuzzy precisely because it is not an SI unit, but a construct of the human mind.

If the major complaint here is that we don't have an SI unit for consciousness in AIs, or people, then I still feel we are in agreement, because I do not believe such a thing can ever come to pass. And a small part of me hopes it cannot, because we know what happens once humans start ranking people based on their perceived abilities.

Rappaport
Oct 2, 2013

Now we're circling back to the physical portion, with your example. I have repeatedly stated that intelligence is both partly a social construct and partly something physical. I do not see what the beauty pageant example has to do with this; I like certain types of people, and other people like other people, but your very example seems to suggest that popularity contests select for certain norms of "beauty". My pocket calculator won't get an h-index higher than mine, but that just emphasizes the point that all of these things are human-centric.

All the same, I, with my flawed human brain, think a dog or a cat possesses a consciousness, but my pocket calculator does not. If the pocket calculator became vastly more intelligent, I posit people would still have trouble with it: if it does not exhibit mammal-like, or human-like, behaviour, it would be difficult to accept as an equal. Or as a superior, but whichever; the thing hinges on what humans recognize as their peers or their betters.

Rappaport
Oct 2, 2013

I think your summation of my position is largely correct, if slightly uncharitable, but that is my fault for being a bad debater and discussioneer. Which actually just goes to my point! The degree to which I can argue my position is a function of my intelligence, and my intelligence at discussioning is inherent to me, at least to some degree. I've read a bunch of sci-fi books, so those colour my perceptions and 'inform' (for lack of a better term) my argumentation, but my perceived intelligence, or lack thereof, is both inherent to my person (ultimately the stuff inside my skull) and a matter of other people's perceptions of it. It goes both ways.

I'm not convinced the beauty comparison is that apt. I think certain people are beautiful, and that's inherent to them, but the perception of their beauty is entirely in me, not in them. There is no similar distinction with intellectual capabilities. Intellectual capabilities are physical, removing brain tissue or starving a person makes them less capable, but the capabilities we are discussing are still recognizable. If someone gets maimed in a car accident, their perceived beauty may be ruined for some observers, while someone with strong feelings for them still regards them as beautiful. That is all entirely in the heads of other humans. I posit that the same is not true for intelligence, even if it too can be ruined by physical accident or violence. Intelligence is there regardless of other people's feelings, and is therefore a fundament of the creature being observed. And again, schooling, nutrition, things like that affect it, but it is a concept that describes the inner workings of the thing being observed or interacted with. The entire idea of children and adults going through schooling hinges on this.

Our social interactions may also hinge on ideas such as beauty, because humans are social animals, but intelligence as a concept IMO differs in that it can be tested, in a school setting, more objectively than beauty can in a popularity contest. Of course we can say that schooling in general is a waste of time and effort because human intelligence is a social construct and so forth, but that brings us straight back to the popularity contest, where this was argued and lost. Intelligence and consciousness are qualities I would say are inherent to the object or creature being considered, in a way that beauty is not. Beauty absolutely cannot be measured in any way that is objective, Nazi skull-measuring included, but we as a society do attempt to measure intelligence, by exposing children to a school process that takes up most of their childhood. I wouldn't subscribe to the idea that school, as an ideal, is the same thing as a beauty pageant, though of course it is a human institution, and charismatic (or beautiful, in this context) people will have an easier time.

To reiterate: it is my position that for human beings, a large portion of them anyway, to recognize an AI as having a consciousness would necessitate the AI having human-like characteristics. That is what we as mammals expect. You are perfectly correct in saying that this doesn't really touch upon the fundamentals of what consciousness is or what it can be, but that is also my point. Humans view this through our own mammal brains, and that can be manipulated, and it can also be deceived in the other direction: into thinking that something that has a consciousness does not, because it doesn't have a cute face and won't engage in what we consider social behaviour. This does not invalidate the concept of consciousness, but it does mean it's something to consider.

Rappaport
Oct 2, 2013

If you want to fault Newton for something besides his career at the Royal Mint, you could point at him trying to fan-fiction-math the dimensions of New Jerusalem. This doesn't disabuse me of the notion that he was intelligent; it just tells me he had weird interests.

Which, again, doesn't really help us define what a human approach to AI would be.

Rappaport
Oct 2, 2013

SubG posted:

So my suggestion is that perhaps the reason why intelligence seems to be a jumbled mess of disjointed ideas is because that's precisely what it is.

Of course it is. I believe I've used several iterations of parables at this point to say that intelligence can't just be measured neatly. But my point is: the colour of a beer bottle is something that can be altered (or measured, precisely), while there isn't a clear-cut way of making a student learn something they have a hard time with, for whatever reason. The entire field of pedagogy can be thrown out the window the minute humanity figures out a way of "painting" people capable of anything, but as far as I know that has not happened yet. Kids take uppers to cram for exams, sure, but that doesn't make any (random sample) of them Paul Erdős, who also took uppers, because he found numbers rather than beer bottles delightful. I can't say for sure whether my pocket calculator is more intelligent than Erdős when it comes to figuring out numbers, but my pocket calculator is certainly less intelligent than David Lynch when it comes to making television entertainment. And so on. Intelligence is both a function of human social interactions and a thing that is inherent to the person, and it seems silly to me to jump to the conclusion that intelligence, as a concept, cannot exist just because we cannot read off someone's intelligence score with a Geiger counter like in Fallout.

I am far more amenable to the idea that consciousness is ultimately simply a social construct, a function of our mammal brains interpreting things like faces and social cues, which makes it harder to say whether Skynet is conscious in the same way a cat is, and maybe the concept is useless when discussing alien minds. That's fine too.

Rappaport
Oct 2, 2013

SubG posted:

My position is that "intelligence" seems to be a label we apply to a vague constellation of human behaviours, instead of being a description of a process or property inherent in objects. Your argument against this is...to list a couple of random human behaviours. Unless you're trying to argue that the "kids" in your example, or Erdős or Lynch, aren't human, or that the things they do aren't human behaviours, I'm not sure I see the actual argument.

A beer bottle's colour can be ascertained by examining it, and intelligence may similarly only be ascertained by observing it "in action", if you will. I listed a couple of random humans because it is evident that someone like Erdős was very mathematically talented, intelligent in that respect, and I picked Lynch as my artist example, but we can take someone like da Vinci instead; he was intelligent in a wide range of things. If it isn't clear from the argument, the idea is that Leonardo (not the turtle) was inherently more gifted, more intelligent, than someone I could randomly pick off the street.

Intelligence, while being a vague constellation if you so insist, still appears to be a clearly and demonstrably physical aspect of humans, and of other animals. Again I point to the lobotomy example: my abilities across the vague constellation of intelligences would be reduced were my brain matter reduced surgically, for some definition of surgery, at any rate. This does not seem an especially controversial statement, because people have been lobotomized, and people have also suffered other sorts of injury or illness that affected their brain in some way, reducing their demonstrable capabilities in the vague constellation. How is this not evidence of the vague constellation being a physical property, inherent to the physical creatures themselves?

The trouble with the AI example and consciousness is precisely that consciousness cannot be measured the way a beer bottle's chemical composition can. But an AI's intelligence is still a sensible thing to discuss, even if it is "artificial" in the mammalian sense or distinction. Does consciousness, or intelligence, need to mimic what is familiar to us, who already possess an intelligence and (arguably) a consciousness?

Rappaport
Oct 2, 2013

SubG posted:

You appear to just be equivocating between "ability to do mathematics" and "intelligence", when we've already established they're not synonymous: "intelligence" encompasses other things (social awareness, rule induction, linguistic ability, and so on); and "ability to do mathematics" is neither necessary nor sufficient for "intelligence". Not necessary: presumably you don't think people who aren't Erdős (or people without Erdős numbers, or even people who can't do math at all) aren't intelligent, and we are willing to rank "intelligence" such that a human is smarter than a dog, who is smarter than a bottle-loving beetle, and we're not doing that ranking by mathematical ability. Not sufficient: a pocket calculator can do mathematics, but we seem to be in agreement that it's not "intelligent".

So if you want to argue that Erdős' ability to do mathematics was at least partially attributable to his physical characteristics, sure. In the same way that a pocket calculator's ability to do mathematics is a consequence of its physical characteristics. But that's not "intelligence". Which is why this is a discussion, instead of just looking up "intelligence" on Wikipedia to find "ability to do mathematics".

You are quoting a post where I referenced Leonardo (not the turtle), too, and he was talented and intelligent in ways both mathematical and not. This entire response seems like a fairly weird "gotcha". I do not believe intelligence is readable off a person like a Fallout S.P.E.C.I.A.L. number, but that does not invalidate the idea of intelligence as a concept.

Rappaport
Oct 2, 2013

duodenum posted:

What does this mean? How is information "stored" in Discord? Is it just searching a chat log for a conversation?

In the before-times, long long ago, people would make websites, like Dylan O'Donnell's NetHack spoilers, or wikis, which have now been aggressively taken over by the "fandom" domain, etc. So now, if you want to find out the best way to make a pretender god in Dominions, you seek out a Discord and ask in there. Which is not exactly ideal for niche subjects, since Discord is a lovely service in a lot of ways, and openly hostile to searching for anything, even within servers you're in.

Rappaport
Oct 2, 2013

Tei posted:

The protection of the newborn is in all of us; we are programmed to defend new life. If you show hostility, or hostile intent, towards a newborn AGI, we will act quickly against you, and with extreme prejudice, following our programming, our instincts.

I take it you have not seen the popular science fiction teevee series Black Mirror.

Rappaport
Oct 2, 2013

Tei posted:

Science fiction is pretty often like the rear-view mirror of a car. It does not look forward, but back.

Have you heard of Stanislaw Lem?

Rappaport
Oct 2, 2013

Rogue AI Goddess posted:

If someone told me that they've read a book that spoke to them and changed their whole life, I would not try to forcefully convince them that books are inanimate objects that lack agency and speech. I would just ask them what the book was about and what it meant to them.

I can have a "conversation" with a book, but it is fundamentally an experience of corresponding with the author. They mean to say something; I glean from it what I can, and it either informs me or it does not. The book is not the pipe, as it were, even if it carries significance to me. The person who wrote my Analysis 101 textbook was conveying ideas through their words, but the textbook isn't that person.

Rappaport
Oct 2, 2013

That is what SHODAN would say, though.

I've observed a dog look at me through a mirror, my canine buddy realizing that it was me. I've observed some very disturbing but intelligent and consciousness-displaying behaviour from cats. So is it prejudice on my part to think of them as thinking? They share mammalian characteristics, so I think of them as "cute", all the while observing their behaviour.

The dog or cat shares some fundamental ideas with me. They mostly wish to eat, drink, and have companionship. I can relate to these aspirations, as I share them. Speaking as someone who has done some studies related to photons, I would not be as comfortable saying a photon is "sentient" the way a cat is.

Rappaport
Oct 2, 2013

If I flick on a light switch, how do the photons feel? This is not a question that has ever occurred to me. If I tell a dog to do something, I would consider its feelings. Is there something about light sources that I should be aware of, and have been ignorant of previously?

Rappaport
Oct 2, 2013

Tei posted:

There's no difference between "talking" to a book and talking to a human being.

The only difference is that the book can't answer; it's one-directional communication.

Sender, receiver, medium, message, latency.

This seems like a pretty major difference?

Rappaport
Oct 2, 2013

BrainDance posted:

What the hell kind of panpsychists have you been talking to? Lol. Cuz, no.

The question of why things are any kind of conscious at all just doesn't really have an answer. I don't see how the argument that mind is an emergent property of completely un-mind-like matter is any better, or any less weird, than the argument that it's an emergent property of mind-like matter coming together in an incredibly fundamental way.

Panpsychists in philosophy aren't out there being like "I tripped and realized the whole universe is alive and thinking!"

Edit: I'd recommend reading the paper Chalmers wrote on panpsychism and seeing what they actually believe. It's nothing about photons having souls, or there being an afterlife, or rocks having a will. There was a guy on YouTube who did a video about it a little while ago, and he completely misinterpreted it as meaning "the universe has a will!", which it absolutely doesn't. A lot of people seem to not really get what actual panpsychists in modern philosophy believe, because, I think, they don't realize how incredibly basic phenomenal consciousness can be, and they start assigning attributes of a higher consciousness to it.

I don't think you're saying rocks have a mind of their own (outside of the computer game franchise Master of Orion), but can you explain what you mean by "phenomenal consciousness"? I don't personally think my pocket calculators have minds, either, but it still seems like a hard sell that my PC does. Or some server farm, whatever the physical location of WALL-E happens to be.

I hope I am not being too dense here, but it seems somewhat of a given that animals possess some level of consciousness; they are self-aware and would prefer not being killed, and so on. This is not similarly true of computers. We can argue about whether a human being or a horse is, in some logical manner, fundamentally any better than a Markov chain, but the horse demonstrably has a theory of mind.

Rappaport
Oct 2, 2013

I was in a teacher seminar about six months ago, not sure exactly how long ago, so maybe it was a super old ChatGPT, and we did some live experiments with what it knew and what it didn't. I'm a STEM-lord, so my questions were "what is Isaac Newton famous for in physics", which got a perfectly acceptable short answer, and "what did Isaac Newton do related to coinage", where the robot very confidently told us a story about how Newton used coins for physics demonstrations. The latter may even be true, but Newton ran the Royal Mint for decades, as Warden and then Master, trying to outsmart counterfeiters and the like. Obviously Newton is more famous for the apple and being a gigantic goony weirdo, but his career at the Mint is relatively well documented and would not, IMO, be an obscure fact about his biography. I can't recall what the history teachers asked it, but it was kind of hit and miss too.

The ideal human teacher knows relatively well what their core competencies are, at least in adult education. If the newer iterations of AI do actual sourcing and the like, that's certainly an improvement, but I would be a bit skeptical about just using it for independent study, especially in a new subject. It definitely has valid uses in education already, but I would look at it more as a robot that's good at doing ultimately pretty mindless work with an efficiency and speed a human being couldn't match.

Rappaport
Oct 2, 2013

Yeah, it was a couple of hours of workshopping. I'm 99% sure I asked my questions in that specific order, because I assumed the first one was a gimme and the second at least slightly trickier. It's interesting to hear that I may have deceived the poor robot into gibberish :ohdear:

Rappaport
Oct 2, 2013

sinky posted:

i'm 'Retat'

[attached image: AI-generated rat]

What the Hell was the prompt here? "Give me a rat with a giant dong umbilical cord and weird cell cluster globules"? :psylon:

Rappaport
Oct 2, 2013

mawarannahr posted:

I don't think this is true. It's not uncommon to switch languages in the middle of a sentence among members of a bilingual household. I'm pretty sure a lot of people think in multiple languages simultaneously, too.

I often find myself in situations where I can remember a word or a term in one language but not in the other, in the middle of speaking a sentence or writing something.

Of course it could just be early(?) dementia :corsair:

Rappaport
Oct 2, 2013

Can we make a Bob Ross AI? Not like the digital ghouls Disney keeps conjuring up; just train an AI to chat soothing small nothings and make nice paintings on the user's screen. Maybe throw in some Mister Rogers for the terminally online doom-scrolling 4-year-olds, too.
