Farecoal

There he go

BoldFrankensteinMir posted:

You might be surprised how many people would answer yes to this question. I have a hard time owning pets myself because of the moral questions underneath (I also know they're really bad for the environment, especially cats). When my family's old tuxedo cat dies I'm going to push pretty hard not to replace him.

Uh just don't let your cat outside???


Farecoal

There he go
Please adopt a cat that has been / will be neutered, there's so many that need homes :(

Farecoal

There he go
we're all cats' slaves imo

Manifisto


well there are at least a couple of issues going on, and if I were a bit less lazy I would bear down on internet searches to find the correct names for these moral/ethical perspectives.

first, there is the question of whether machines themselves are entities with moral/ethical rights or interests, and if so whether the uses to which we put them violate those rights or interests. I'll touch on that later.

second, there is a very separate question about the extent to which treating machines that have sentient qualities (even if these are totally illusory) "badly" is a sign of, or encourages, antisocial behavior towards other humans. for example, torturing animals in childhood is, I believe, often considered a potential sign of developing antisocial pathologies. I think many of us would agree that someone who takes obvious glee in verbally or physically abusing a very human-like (but clearly nonsentient) entity is potentially someone to watch out for. if you extend this logic, does this also mean that someone who is willing to treat a very human-like servant with slave-like dismissal/lack of empathy is also someone to watch out for? is this someone who would be just fine owning human slaves if they could get away with it?

this second issue is a little sticky because I am not sure we actually want people to be too "generous" in extending empathy to clearly nonsentient entities. or maybe the better issue is, what kind of empathy is really appropriate? if a very human-like voice from an automated teller machine tries to persuade me to do something--buy something, or donate to something, or what have you--this is a trick. the programmers are hacking empathy responses to manipulate human behavior. and, you know, this commercial makes a point:

https://www.youtube.com/watch?v=dBqhIVyfsRg

although we are programmed from childhood through stories like "the giving tree" to accept the notion of objects as having feelings, a lamp is really just a lamp. it is incapable of feeling sad. there are valid reasons, even sentimental ones, for treating objects with a certain amount of "respect," but that respect has nothing to do with what the object feels or thinks. it has to do with the object as a proxy for broader concepts, things like human relationships, or memories, or social values.

I know people who are sentimental about cars. to an extent that is harmless, or even a sign of an empathetic person, but let's say someone had to make a split-second choice between saving their car from destruction vs. saving a stranger from serious bodily injury or even death. if those empathetic bonds with the car cause even a slight hesitation in making that (to me totally obvious) decision, isn't that a problem?

I am not saying I know the "right" amount or kind of empathy that humans should display towards nonsentient machines. cutting off all empathetic responses may make people less empathetic towards humans (and, if one considers this important, also highly developed animals), and/or it may disguise signs of maladjusted antisocial personalities. on the other hand, encouraging a personal relationship that is too much like a human relationship may lead to other problematic distortions.

as between the first and second types of problems, I think the second one is far more important in the short and intermediate term. society is going to have to find its way. perhaps we want to deliberately keep our AIs fake and "robotic" to remind us (and create a social bright line) that it's okay to tell alexa or siri to shut up midsentence, or to turn them off whenever it's convenient for us, etc. but a big problem there is that there can be a lot of benefit, a lot of profit, in hacking the empathy response, so there are always going to be people or companies with a lot of money and/or clout who want to push those boundaries.

with regard to the first problem, it's clearly further off but probably more "important" in the long run. it's actually hard to know for sure how far away we are from machines we're willing to call sentient. there has been quite a bit of progress in ai recently, if articles are to be believed, assisted by trends like steady and reliable increases in computing power and big corporations spending a lot of money on ai development. if sentience is an emergent phenomenon, I'm not sure anybody has a good ability to predict when it will emerge, or how. something could fail the turing test yet still be sentient in a meaningful sense. we owe it to ourselves and our future artificial . . . colleagues? cohorts? (masters?) . . . to put our relationship on a firm ethical footing, but frankly we don't even understand how to deal with nonhuman but very intelligent social animals such as chimps, elephants, and parrots. those are far more like a human than an artificial intelligence.



roomforthetuna

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Manifisto posted:

but frankly we don't even understand how to deal with nonhuman but very intelligent social animals such as chimps, elephants, and parrots. those are far more like a human than an artificial intelligence.
Most of us don't even understand how to deal with other humans most of the time.

alnilam

Manifisto posted:

if you extend this logic, does this also mean that someone who is willing to treat a very human-like servant with slave-like dismissal/lack of empathy is also someone to watch out for?

tbh any time i see someone treat a server or retail worker rudely i pretty much write that person off as probably not a good person, even if they've been friendly to me

cda

by Hand Knit
Oh great, so it's not okay to call Alexa the n-word now? Political correctness has gone too far.

BoldFrankensteinMir


cda posted:

Oh great, so it's not okay to call Alexa the n-word now? Political correctness has gone too far.

I have been told by people that referring to her by the N-word is unacceptably barbed, so I've taken to bringing up the subject by calling her a "mammy".

Sing me one of those old-fashion spirituals, mammy! Swiiiing looooow, sweet chaaaaaariot...

(no but seriously one of the things that led me to these thoughts was when I noticed the ads for the echo on Amazon boxes note "play me a song" and "tell me a joke" as 2 of the 4 features advertised. It's literally half shufflin' for massa...)



BoldFrankensteinMir


Manifisto posted:

there is a very separate question about the extent to which treating machines that have sentient qualities (even if these are totally illusory) "badly" is a sign of, or encourages, antisocial behavior towards other humans. for example, torturing animals in childhood is, I believe, often considered a potential sign of developing antisocial pathologies. I think many of us would agree that someone who takes obvious glee in verbally or physically abusing a very human-like (but clearly nonsentient) entity is potentially someone to watch out for.

Quoting this because it's brilliant. A child who enslaves their dolls is definitely giving off warning signs. Why not an adult who orders around their egg-timer?



cda

by Hand Knit

BoldFrankensteinMir posted:

Quoting this because it's brilliant. A child who enslaves their dolls is definitely giving off warning signs. Why not an adult who orders around their egg-timer?

I disagree btw, children do all sorts of stuff to dolls


BoldFrankensteinMir


cda posted:

I disagree btw, children do all sorts of stuff to dolls

So if you peeked in on your kid and they had all their barbies chained up in the corner to watch the execution of a Moana doll, that wouldn't raise any concerns at all?



UWBW

Permanently banned from the Alamo
In light of my sins I flagellate myself with CAT 5e cables every morning.



alnilam

BoldFrankensteinMir posted:

So if you peeked in on your kid and they had all their barbies chained up in the corner to watch the execution of a Moana doll, that wouldn't raise any concerns at all?

children do all kinds of messed up violent stuff with their toys and grow up to be fine. it's unfortunate but violence is part of our media and idea of epic stories etc, so it's not surprising that children do weird stuff like have their non-combat-related dolls kill each other, or a royal doll say "off with her head!!" about another doll like in alice in wonderland, etc

https://www.youtube.com/watch?v=BpaRouocBes&t=101s




Farecoal

There he go

BoldFrankensteinMir posted:

So if you peeked in on your kid and they had all their barbies chained up in the corner to watch the execution of a Moana doll, that wouldn't raise any concerns at all?

i've done right *wipes away tears* i've done goddamn right

Pot Smoke Phoenix



Smoke 'em if you gottem!
"Sometimes you gotta burn a few green army men. Sometimes you gotta burn a few tan ones..."

-General "I Love the Smell of Nylon in the Morning" Kneelingmortarguyvb


Starman Super DX

This title text is surprisingly sturdy.

Splatmaster posted:

"Sometimes you gotta burn a few green army men. Sometimes you gotta burn a few tan ones..."

-General "I Love the Smell of Nylon in the Morning" Kneelingmortarguyvb

Tell me more!

wearing a lampshade

Children are actually capable of having fun playing a game of 40k somehow, I don't think I have it in me to take that away from GW.

Pot Smoke Phoenix



Smoke 'em if you gottem!
I lay awake at night regretting all those bats I had to kill outside Kaladim to get enough money off the NPC to buy a SSB off some guy by the second torch in the tunnel, wherever the hell that is. I just knew I HAD to have the shiny...


Manifisto


BoldFrankensteinMir posted:

Quoting this because it's brilliant. A child who enslaves their dolls is definitely giving off warning signs. Why not an adult who orders around their egg-timer?

some very good points being raised about this. I guess I just want to note that there is imo an important distinction between a simple play doll that you pretend is a person being treated harshly, vs. one that is programmed to make very realistic noises and sounds that convey distress. I wish I had a better vocabulary to describe this, but if you're imagining torture of some entity that exists mostly in your head, that entity's responses are entirely within your mental control, so (I believe) they feel less "real." you know they're not crying or bleeding or whatever because you're doing all the mental work of making those things occur. on the other hand, the realistic programmed simulation is coming up with responses that aren't a direct result of your mental efforts, they're coming forth spontaneously as they would with a real person or animal. and I think taking pleasure in those distress responses that you didn't "create" in your imagination is somehow "worse" or creepier or whatever. it's kind of a mystery, but I think people in general have impressively good internal controls that let us separate our fantasies/imaginations from things in the real world.

I am a big fan of douglas hofstadter's work, and one point he makes consistently is that a key feature of our "intelligence" is our ability to envision counterfactuals. it is a very . . . subtle and complex facility, I can't possibly do it justice, but a good chunk of what passes for intelligence seems to have to do with our ability to construct various alternative realities, see how they play out, and then apply those lessons to the real world. I guess my point is that this facility wouldn't be nearly as effective if envisioning negative counterfactuals was always cripplingly traumatic. it is indeed a problem sometimes, but day to day I think most people are able to deal with thoughts like "hmm if I don't swerve around that obstacle on the highway I'm going to crash and probably die" without getting too caught up in the emotional implications. otherwise we wouldn't be able to swerve. so yeah, I think relatively well-adjusted people are capable of handling thoughts like "my boss is so mean, I wish he would drop dead" and sort of envision how that would play out without becoming sadistic monsters. and I think kids need to learn to develop that facility, so the somewhat creepy doll-play is probably "healthy" in certain doses. whereas kids torturing small animals that display pain and distress is something different. don't ask me where torturing bugs falls on the spectrum.



BoldFrankensteinMir


Manifisto posted:

I guess I just want to note that there is imo an important distinction between a simple play doll that you pretend is a person being treated harshly, vs. one that is programmed to make very realistic noises and sounds that convey distress.

50% of internet searches are sexual in nature and 30% of those are violent in nature. If you think those numbers are going to change when the user device is a servo-jointed silicone cadaver you rinse off and fold under your bed you're fooling yourself. Watson didn't win Jeopardy off the top of his head, he was skimming an offline copy of Wikipedia, and Synthia Suckomatic will be doing roughly the same for a long time.



cda

by Hand Knit

BoldFrankensteinMir posted:

So if you peeked in on your kid and they had all their barbies chained up in the corner to watch the execution of a Moana doll, that wouldn't raise any concerns at all?

This is such a deliciously specific example

canyoneer


I only have canyoneyes for you

BoldFrankensteinMir posted:

So if you peeked in on your kid and they had all their barbies chained up in the corner to watch the execution of a Moana doll, that wouldn't raise any concerns at all?

i'd let it play out, there's just no telling how far she'll go

Munchables

Ask/tell me about legal cannibalism

I would be concerned what the crime was and if the doll was given a fair and just trial.

BoldFrankensteinMir


cda posted:

This is such a deliciously specific example

Munchables posted:

I would be concerned what the crime was and if the doll was given a fair and just trial.

canyoneer posted:

i'd let it play out, there's just no telling how far she'll go

https://www.youtube.com/watch?v=W5QmBTWCDxI



BoldFrankensteinMir


Interesting article about robot design from the BBC. The sock puppet dragon that bottlefeeds babies and has a "safety killswitch" is particularly harrowing.

There's something that confuses me about AI design- why do we want them to lie to us? Every time a computer tells me it's "sorry" i can't help but feel lied to, because I have been- the computer isn't sorry. When a digital face looks happy or sad, those expressions don't actually reflect internal thoughts. They're lies. Why do people want machines that can't feel emotions but pretend to? It seems like a ploy to get you to be more comfortable but making somebody comfortable with false statements about your internal emotions is hosed up sociopath behavior, and it feels demeaning. If the objective is to make it easier for people to work with robots but the method reinforces the idea of emotional displays as mere social grease, rather than actual expressions of feelings, haven't we forced real feelings out of society at that point? Children who are "robotic natives" will be emotional phonies. It's a problem we already have in the US where "sorry" is just punctuation and when somebody asks you how you are they don't actually want a real answer, they're just pleasantries. Ugh, creepy creepy creepy.



cda

by Hand Knit
You know what's creepy, is spiders that are big or magnified

Manifisto


BoldFrankensteinMir posted:

There's something that confuses me about AI design- why do we want them to lie to us? Every time a computer tells me it's "sorry" i can't help but feel lied to, because I have been- the computer isn't sorry. When a digital face looks happy or sad, those expressions don't actually reflect internal thoughts. They're lies. Why do people want machines that can't feel emotions but pretend to? It seems like a ploy to get you to be more comfortable but making somebody comfortable with false statements about your internal emotions is hosed up sociopath behavior, and it feels demeaning. If the objective is to make it easier for people to work with robots but the method reinforces the idea of emotional displays as mere social grease, rather than actual expressions of feelings, haven't we forced real feelings out of society at that point? Children who are "robotic natives" will be emotional phonies. It's a problem we already have in the US where "sorry" is just punctuation and when somebody asks you how you are they don't actually want a real answer, they're just pleasantries. Ugh, creepy creepy creepy.

yeah, I also sometimes get annoyed when people say "sorry" and clearly don't mean it. however, you use the term "social grease" somewhat dismissively, but humans are social animals and the "social relationship" aspect of our actions occupies an extraordinarily large part of our intellectual faculties. we send different kinds of signals in different settings; a quick "sorry" to someone you accidentally bump in line is way different from saying "sorry" when you've let down someone you care about. I would say that the ability to gauge what kind of etiquette is demanded in what kind of context is a super hard question, one that I don't think current ai would have a realistic shot at handling--your article notwithstanding.

computers (current, non-sentient ones) being programmed to say "please" or "sorry" is an imperfect compromise. some people want their robots terse and informative, some find that too creepy and impersonal (perhaps they find it implicitly authoritarian?). so the designers try to come up with some version that in the aggregate works better than the others. the article you link makes this point: we want our robots to be "aesthetic," we want to cut down on negative responses to them and facilitate positive ones. and it's hard to call that wrong; if we have robo-surgeons that are very good at patching up dangerous wounds, we don't want patients to be terrified or try to run away, we want the patients to accept that the system has a benign purpose. but in general, at least as far as I'm aware, it is the machine designers making these tradeoffs, not the machines themselves. they are not "deciding" to be tactful, they're programmed to say x thing in y situation. I would call this a very "shallow" form of politeness.
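(to make "programmed to say x thing in y situation" concrete, here's a minimal sketch in python; the situation names and phrases are entirely made up, not any vendor's actual behavior:)

code:
# "shallow" politeness: a fixed lookup from situation to canned phrase.
# nothing is felt or weighed; the machine just maps situation y to phrase x.
CANNED_PHRASES = {
    "request_failed": "Sorry, I couldn't do that.",
    "user_said_thanks": "You're welcome!",
    "user_interrupted": "No problem.",
}

def respond(situation: str) -> str:
    # unknown situations fall back to a neutral acknowledgment
    return CANNED_PHRASES.get(situation, "Okay.")

print(respond("request_failed"))  # -> Sorry, I couldn't do that.

the lookup never changes based on who is asking or what the failure cost them, which is exactly why I'd call it shallow.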

the thing is, there is clearly so much more going on when a person decides to say "sorry" than when a machine is programmed to do it, it's not even funny. these involve complexities such as face and persona. and precisely because so much is going on, I think it's not ultimately that hard to distinguish "shallow" politeness from "deep" social responses. young children might get confused when robots and subway systems and whatnot say "please" and "thank you" but I think they will naturally develop an intuitive sense that they need to pay far more attention to what the people are doing, because there is so much more to it. indeed it is crucial for children to be able to get the "lay of the land" in highly varied situations; a society that places heavy emphasis on social cohesion, as japan is often portrayed, will require a different mindset than the more-individualistic united states. different behavior is appropriate in school, on the playground, at the dinner table, and one on one with close friends. figuring out how to "fit in" to those imprecise and mutable contexts is not a simple or mechanical thing, it is precisely the type of complexity that, as far as we know, only sentience is capable of.

indeed I think that's the whole point of the turing test. for a non-sentient being, we will inevitably decide, sooner or later, that there's no "there" there, no conscious mind making those responses. so I'm not too afraid that people will be taken in and their social responses warped by non-sentient robots, because a robot good enough to fool people consistently is by definition sentient.

that was an interesting article you linked, but to make the statement "robots are already smarter than we are", even as a light generalization, betrays a fundamental lack of understanding about what deep ai really is. it is much easier to construct something that seems human-like in a very specific setting with highly formalized rules, like playing chess or go or even jeopardy. but that's a brittle facility; those systems would be exposed in a flash if you dropped them into a randomly selected human social situation.



Manifisto


cda posted:

You know what's creepy, is spiders that are big or magnified

hmm . . . an alleged human being internet user in tyool 2018 who has apparently never seen a jumping spider meme

turing test status: FAIL



BoldFrankensteinMir


Very very good points Manifisto, especially....

Manifisto posted:

to make the statement "robots are already smarter than we are", even as a light generalization, betrays a fundamental lack of understanding about what deep ai really is. it is much easier to construct something that seems human-like in a very specific setting with highly formalized rules, like playing chess or go or even jeopardy. but that's a brittle facility; those systems would be exposed in a flash if you dropped them into a randomly selected human social situation.

That line also really bugged me in the article; to think of present AIs, or even the AI of the immediate future, as close to human intelligence is flat out ignorant. And a big part of that is emotional intelligence, something we understand poorly in ourselves and even in animals. But there's something very revealing about this mistake: to think of intelligence as just a purely pragmatic thing is selling the idea really short. Intelligence is not just, as I once read, "arriving at correct answers". That's Right Makes Might, which is not that much better than its inverse. The quest for right answers, which is to say the quest for truth, is admirable, sure. But in the immortal words of J. Robert Oppenheimer, "Science is very beautiful. But it is not everything".

So here's where I'm at, and I'll admit it's confusing and distressing and I'm trying to iron it out: there are two kinds of AI for the purposes of this discussion, ones that don't have feelings (what we have now) and ones that do (a future development that seems inevitable; how inevitable and how quickly are matters of debate). The ones that don't, from the machine's perspective, can't really be true slaves, because enslaving is the act of violating will, which they don't have. However, from the user's perspective, if the machine pretends to have feelings then it may qualify as a kind of pretend slave, and the question I'm wrestling with is what are the effects of pretend slave-ownership on the pretend master, who DOES have feelings that are intentionally being manipulated for ease of use? Is ease of use worth synthesizing the real emotions directed at the pretend slave?

Then there's the second kind of AI that really does have feelings, and might really qualify as a true slave, which we already know is wrong from both the user's AND the machine's perspective. If you could make a truly sentient robot you would immediately burst onto the scene of machine rights- would you build a machine to suffer? And if not, if the machine likes to be a slave, is THAT right?

All these definitions require understanding of emotional intelligence and its relationship to morality, which is something we understand badly, yet here we go making pragmatic decisions that assign values to the things we don't understand. Is it worth potentially screwing up the very fabric of our ethics and feelings just to grope around in the dark for these answers?! And is doing so in the framework of a capitalistic society where the rich will own more of the machines than the poor, if not all the machines, going to even work? How can we find subtle truth in that scenario if all we care about are end results? Aaaaaaaugh it's all so confusing.

And hell, there's even a third type if we consider 420 Swaglord's point about "AI gods"- if (or when) we build AIs that are superior to us to the point we can consider them "gods", wouldn't their emotional intelligence be a factor? Is an AI that can calculate and act and have agency far beyond a human necessarily going to have superhuman understanding of sadness and love and such? To be a god would kind of suggest that, unless you believe in a feelingless god, which may be the scariest idea possible. I mean really, God minus love, that's a simple formula for The Worst Thing, right?

At this point it all seems like purely academic philosophy bites to be passed around with the bong, but there are real world applications to these questions that are building every day. All I know for sure is that I feel demeaned and lied to when a machine pretends it's a person, and people who tell me I need to get with the program (quite literally) for efficiency's sake or to not be a "coward" make me angry and want to run. I know we're probably heading towards Planet House-N****r but that doesn't mean I have to like it, or that just accepting it because it's easier is automatically "right".



Chasterson

by Nyc_Tattoo
I agree that there are sort of two different things to talk about here, but I sort of disagree with your framing of, like, alexa and siri or whatever.

These virtual assistants are products that nobody really asked for or even necessarily wanted, they're marketing creations intended to make you feel good about upgrading your phone or ordering even more poo poo from Amazon.

Like if they were constructed practically you would be able to rename them to something like "phone" or "appster" or whatever, but they will never add that functionality, because the important thing for the companies making these isn't that they're selling you a virtual assistant; it's that amazon is your wisecracking robot friend, not a big scary corporation that will have a monopoly on all retail within the decade.


BoldFrankensteinMir


I'm totally inclined to agree with you Chasterson, but I've read and heard from more than one person who is in rapturous awe of their Siri and/or Alexa. They say things like "she's my best friend" or "I can't imagine life without her now" and those kinds of relationships to consumer goods have always freaked me out. When I read things like developers at Microsoft saying that someday we'll each have our own "alter ego", a voice activated assistant who goes with us everywhere in an earpiece and becomes part of our psyche, I get the Twilight Zone terrors because I know people who are salivating for that stuff and I might wake up any day to find out society has shifted to require it. It happened with smartphones and it happened with cars, why not facetious computer ghosts that really work for the world's biggest companies?



Manifisto


Chasterson posted:

I agree that there are sort of two different things to talk about here, but I sort of disagree with your framing of, like, alexa and siri or whatever.

These virtual assistants are products that nobody really asked for or even necessarily wanted, they're marketing creations intended to make you feel good about upgrading your phone or ordering even more poo poo from Amazon.

Like if they were constructed practically you would be able to rename them to something like "phone" or "appster" or whatever, but they will never add that functionality, because the important thing for the companies making these isn't that they're selling you a virtual assistant; it's that amazon is your wisecracking robot friend, not a big scary corporation that will have a monopoly on all retail within the decade.

I agree overall, and I just find these digital assistants awkward to use; I'd much rather type what I want, and when I hear people using them around me I both cringe internally and think "jesus just type your search out, nobody else wants to hear it". a handsfree voice interface might be really useful in certain specific situations, like when you're driving a car, but in general? they seem more gee-whiz than useful.

but I think it's important or at least useful to unpack the essence of these digital assistants. they integrate a number of features, including:

1. voice recognition
2. natural language processing
3. problem-solving capabilities, such as searches, expert system approaches, what have you
4. framing responses in natural language
5. the injection of "personality" into the responses
6. speech synthesis

some of these are genuinely useful/impressive, some are impressive but more in a theoretical than practical sense, and some are just sort of ho-hum. the things that make the current assistants "branded" or distinct from one another are mostly 5 and 6, and in some ways those are the least noteworthy aspects of what they're doing. they just make an outsized impression. 2, 3, and 4 are much more the essence of "intelligence" and on that score I think there is impressive actual and potential utility, as well as significant implications for the future. but I don't think of cortana or alexa or siri as being noticeably mismatched on these scores.
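(to make the division of labor concrete, here's a toy sketch of that six-stage pipeline in python; every function is a made-up stub standing in for a whole subsystem, not any vendor's actual api:)

code:
# toy assistant pipeline; each stage is a stand-in stub
def recognize_speech(audio: bytes) -> str:   # 1. voice recognition
    return "what's the weather in boston"

def parse(utterance: str) -> dict:           # 2. natural language processing
    return {"intent": "weather", "place": "boston"}

def solve(query: dict) -> dict:              # 3. problem solving (search, expert system, ...)
    return {"forecast": "rain", "high_f": 52}

def frame(result: dict) -> str:              # 4. framing the answer in natural language
    return f"It looks like {result['forecast']} with a high of {result['high_f']}."

def add_personality(text: str) -> str:       # 5. the branded "personality" layer
    return text + " Don't forget an umbrella!"

def synthesize(text: str) -> bytes:          # 6. speech synthesis (stand-in for audio out)
    return text.encode("utf-8")

out = synthesize(add_personality(frame(solve(parse(recognize_speech(b""))))))
print(out.decode())

note that you could swap out 5 and 6 wholesale (different voice, different quips) without touching the "intelligence" in 2 through 4, which is exactly why the branding is the least noteworthy part.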



BoldFrankensteinMir


From what I've researched Siri is objectively the worst at #1 but that's not surprising because she came out earlier and she's from Apple.

And yeah, seriously, nobody wants to hear you use your devices in public. And who isn't able to swype faster than they can talk anymore anyway? But then I realize there are handicapped and elderly people so those are ableist complaints. What a world, what a world.



cda

by Hand Knit
Ok Google is the best in my experience, which I guess is about what you would expect

BoldFrankensteinMir


cda posted:

Ok Google is the best in my experience, which I guess is about what you would expect

I like that they didn't personify it, and that really doesn't detract from the usefulness at all.

Manifisto


BoldFrankensteinMir posted:

I'm totally inclined to agree with you Chasterson, but I've read and heard from more than one person who is in rapturous awe of their Siri and/or Alexa. They say things like "she's my best friend" or "I can't imagine life without her now" and those kinds of relationships to consumer goods have always freaked me out. When I read things like developers at Microsoft saying that someday we'll each have our own "alter ego", a voice activated assistant who goes with us everywhere in an earpiece and becomes part of our psyche, I get the Twilight Zone terrors because I know people who are salivating for that stuff and I might wake up any day to find out society has shifted to require it. It happened with smartphones and it happened with cars, why not facetious computer ghosts that really work for the world's biggest companies?

there was a pretty good science fiction story, I can't put my hands on it at the moment, in which future humanity developed a ubiquitous assistant AI that was portrayed as more or less like a superdog. faithful and loyal servant and friend, but in a way that's distinct from a person-to-person relationship. I suppose that may be a healthier way to conceptualize this kind of dynamic than a human-like djinn who is at your beck and call.

not that humans are necessarily good at treating dogs well, although some are.



Matoi Ryuko


The Boston Dynamics videos will be exhibit A when humanity goes on trial for crimes against machine life.

Manifisto


Matoi Ryuko posted:

The Boston Dynamics videos will be exhibit A when humanity goes on trial for crimes against machine life.

oh god

I'm imagining a bunch of superdeveloped ais laughing as they force humans to walk on ice, kick them occasionally, etc

Matoi Ryuko


Manifisto posted:

oh god

I'm imagining a bunch of superdeveloped ais laughing as they force humans to walk on ice, kick them occasionally, etc

https://www.youtube.com/watch?v=E0Rc9CzVRuQ

https://www.youtube.com/watch?v=JzlsvFN_5HI


Manifisto


WalterMittyBot 2000
