little munchkin
Aug 15, 2010

Lightning Lord posted:

Yes but I mean he never said "Time to kill all the non-whites and liberals" like alt-right types want.

Look up George Tiller.


Who What Now
Sep 10, 2006

by Azathoth

Eripsa posted:

I made another video ripe for intellectual mockery. Here's Made of Robots Episode 1: Robot Rights. Cheap yo! Guest appearances by WrenP-Complete and Eripsabot. Comments and suggestions appreciated.

Eripsa, I'm not quite finished with your video, but I just wanted to say while it's fresh in my mind that it seems disingenuous, or at least it does a disservice to your argument, to be unwilling to discuss the definition of rights in a discussion about rights.

Edit:

In point 4a, when trying to lay down your case that robots are social participants, you list that we treat them as such (in some cases) and do things such as give them awards. However, this is true of fictional characters as well. Kermit the Frog, for example, was awarded an honorary doctorate by Southampton College. People also grieve when their favorite characters die in Game of Thrones or Walking Dead. Perhaps not to the degree of soldiers grieving the loss of a service robot, but grieving nonetheless.

Given this, do fictional characters deserve rights as well for reasons similar to robots? If not, why not?

Who What Now fucked around with this message at 14:47 on May 17, 2017

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!
Thanks for the feedback everyone!

Forer posted:

I liked your video, I just wish you had covered this as well: https://www.nytimes.com/2015/06/18/technology/robotica-sony-aibo-robotic-dog-mortality.html?_r=0 (though it might have been too dense then)

Yesss! This stuff was definitely on my mind as I was compiling the video, but I decided that I made the "community integration" point well enough for this video. Density is a persistent problem for me. FWIW, my primary interest is in the impact of robotics on social dynamics, so I've been covering this ground a lot, and I have a whole spiel on robots and death ("being-towards-obsolescence" cf. Heidegger) where this case is more appropriate.

I wrote up a rough episode list a while back, but I don't remember if I shared it here.

quote:

Made of Robots episodes (planned):

Robot rights, cheap yo! Hitchbot, tweenbot, robot funerals & #botALLY
Robo-ethics: tunnel problem, FLI open letter, killer robots etc
The AI Winter
The Turing Test and Fair Play
The Chinese Room Argument Sucks
Dreyfus and What computers can’t do
Lovelace’s objection and Autonomy
Kasparov vs Deep Blue (or: when computers took over the world)
AlphaGo and Tensorflow
Descartes and the ghost in the machine
Aristotle and goal-oriented organization
Kant and the organization of self-consciousness
Biological and technological autonomy
The Singularity Eschatology
The Myth of AGI and Friendliness
Simulation hypothesis and the Eternal Recurrence of Bad Philosophy
Sexbots and the objectification of objects
Watson, Hype, and the next AI winter
Real Robot Movies
Integration and the Hard Problem of Consciousness

I think I want to do Turing's test next, since it clarifies the issue of social participation I started developing in this first video.

WrenP-Complete posted:

Eripsa, I just watched your new video. Much smoother than your last ones. Good job talking slower. I like the style of you speaking over the moving video!

I don't quite understand what it would mean for robots to be a protected class by virtue of them existing in a social space. Does that mean that, say, an automated soldering arm at a factory would or would not have rights? Must the "social space" consist of a mixed group of robots and humans?

Thank you! You didn't help make the video, but you've certainly inspired me to take this work seriously. Eripsabot has helped me appreciate just how much I have to contribute to this domain. You deserve a lot of credit here =)

My claim is not just about "existing" in a social space. The composition of the space doesn't matter as much as its dynamics. My claim is about participation as an interactive process. As I said, this will all become more clear as I go through Turing's test and the classical AI debate; my core view is that thinking(/intelligence/cognition/identity/self-hood) is fundamentally an interactive process between agents. This is literally true at all scales (protons are interactions of quarks and gluons, etc), but it is specifically true of social agency in terms of agent detection, theory of mind, and social cognition generally.

So a robot is a social participant when it acts as a social participant. This implicates our attitudes towards the robots, but also the robot's own activities. A robot arm in a factory is unlikely to be engaged in sufficiently interactive social exchanges to be recognized and treated as "social" by any other participant. But a delivery bot programmed to engage in light banter with people on the sidewalk might.


Who What Now posted:

Eripsa, I'm not quite finished with your video, but I just wanted to say while it's fresh in my mind that it seems disingenuous, or at least it does a disservice to your argument, to be unwilling to discuss the definition of rights in a discussion about rights.

Edit:

In point 4a, when trying to lay down your case that robots are social participants, you list that we treat them as such (in some cases) and do things such as give them awards. However, this is true of fictional characters as well. Kermit the Frog, for example, was awarded an honorary doctorate by Southampton College. People also grieve when their favorite characters die in Game of Thrones or Walking Dead. Perhaps not to the degree of soldiers grieving the loss of a service robot, but grieving nonetheless.

Given this, do fictional characters deserve rights as well for reasons similar to robots? If not, why not?

Please, argue about rights all you want. I'm just saying that my argument doesn't hinge on any special definition of rights, and clarifying that definition is not the fight I'm having here. I'm fighting over the claim that (some) robots are social participants deserving some official recognition and protections. I'm using the term "rights" just to mean "official recognition and protections", and I'm not saying anything stronger about how they work, why we have them, etc.

Your second question is good, and goes back to my points about participation as an interactive process above. The technical explanation can be put in terms of a defense of Enactivism as a theory of agency, and this is the foundation of my defense of Turing's test (which will be the next video).

But the tl;dr is that interaction is a two-way street. It's not just about how we treat the robots; it's also how the robots support and reinforce that treatment, and how this mutual dynamic shapes both participants. The problem with most artificial agents today is that they can only support this dance for a couple of moves before exhausting their complexity. I can grieve over a GoT character or award Kermit a PhD, but literally nothing else about that relationship resembles what it imitates. You can't expect Kermit to provide the same insight into a domain as you could expect from an actual PhD, etc.

The cases I cover in the video are all of bots that can, at least in a minimal sense, keep up their side of the relationship. Humans are so abundantly social that it doesn't take much; hitchBOT didn't require much more than a Twitter account for PR. But this is still an active contribution on the part of the machine toward organizing the social forces available to carry it along to its goal. hitchBOT's agency is closer to that of a virus than of a fully self-organizing cell, but just as a virus is undeniably a biological agent, hitchBOT is also undeniably a social agent.

And if it's a social agent, then it gets taken up by the same frameworks of social consideration we impose on all the other agents in the world (humans, animals, corporations, states, etc). And this is all it takes for the vocabulary of rights to come into play. At least, that's my argument. In the next video I'll make explicit that this argument isn't just mine, but is actually what motivates Turing's so-called "test". This is something that I'm not sure anyone in the AI community fully appreciates.

These were all great questions, thanks!

Eripsa fucked around with this message at 16:21 on May 17, 2017

Feinne
Oct 9, 2007

When you fall, get right back up again.
Being fair to Kermit, if you had a PhD voicing him then yes, he could provide that level of insight. The only difference between him and a robot programmed for banter is that a human is actively providing the vocal output instead of doing it ahead of time.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

Treating the interactions of particle physics as "thinking" on behalf of the particles is incredibly stupid at face value, and an unhelpful anthropomorphization at best.

Who What Now
Sep 10, 2006

by Azathoth
I would argue that robots, being currently incapable of sentience, cannot be meaningfully discriminated against or oppressed. Their owners could be, but not the robots themselves any more than a hammer, a calculator, or any other tool can be. Thus, while they deserve to not be defaced or destroyed, they cannot be considered a protected class. You might be better served by saying they need certain protections inspired by or similar to those afforded by Protected Class status.

I also don't agree with you that robots currently "participate" in society, but rather humans participate at them. The robots' contributions are primarily post-hoc rationalizations ascribing meaning to actions that are preprogrammed, random or semi-random, controlled by a human, or a combination of any/all of those things. In every case the robot has no real control over its supposed contributions.

Currently, under your definitions as I understand them, a drinking bird toy would be deserving of protected class rights because I can talk to it and it clearly communicates back to me by nodding yes. I can even ask it point blank "Are you a sentient, sapient, fully cognizant person deserving of rights" and it'll say yes! But I think even you can see that would be absurd.


Basically, I think you're putting the cart before the horse here. It's possible, I even believe somewhat likely, that in the future there will be AIs that will deserve to be considered to be people with all the rights and protections that entails. And you're right (oh god, I just threw up in my mouth a little saying that) when you say it's important that we discuss the subject now so we can be ready if it happens. But currently robots are objects, property owned by their owners, and deserving of the protections that property has and not the rights that people have, no matter how attached we might be to them.

Forer
Jan 18, 2010

"How do I get rid of these nasty roaches?!"

Easy, just burn your house down.

Eripsa posted:

Good stuff


So I'm dumb and you might be covering this, but is the ethics of neural network simulations such as OpenWorm covered under the Descartes segment?

Midig
Apr 6, 2016

Not this poo poo again. Thread will be gassed if you don't talk poo poo about "counter arguments".

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

Feinne posted:

Being fair to Kermit, if you had a PhD voicing him then yes, he could provide that level of insight. The only difference between him and a robot programmed for banter is that a human is actively providing the vocal output instead of doing it ahead of time.

One feature of a PhD holder is that they don't depend on the presence of another PhD in order to display PhD-level expertise. On these grounds, Kermit fails this simple test.

Captain Foo posted:

Treating the interactions of particle physics as "thinking" on behalf of the particles is incredibly stupid at face value, and an unhelpful anthropomorphization at best.

The identity of particles is a product of their interactions with other particles. This isn't "thinking", but it is an individuation procedure. My point is that the same individuation procedure operates at all scales. So particles are distinguished by their interactions, and agents are also distinguished by their interactions.

I'm basically an eliminativist about "thinking".

Who What Now posted:

I would argue that robots, being currently incapable of sentience, cannot be meaningfully discriminated against or oppressed. Their owners could be, but not the robots themselves any more than a hammer, a calculator, or any other tool can be. Thus, while they deserve to not be defaced or destroyed, they cannot be considered a protected class. You might be better served by saying they need certain protections inspired by or similar to those afforded by Protected Class status.

I also don't agree with you that robots currently "participate" in society, but rather humans participate at them. The robots' contributions are primarily post-hoc rationalizations ascribing meaning to actions that are preprogrammed, random or semi-random, controlled by a human, or a combination of any/all of those things. In every case the robot has no real control over its supposed contributions.

Currently, under your definitions as I understand them, a drinking bird toy would be deserving of protected class rights because I can talk to it and it clearly communicates back to me by nodding yes. I can even ask it point blank "Are you a sentient, sapient, fully cognizant person deserving of rights" and it'll say yes! But I think even you can see that would be absurd.


Basically, I think you're putting the cart before the horse here. It's possible, I even believe somewhat likely, that in the future there will be AIs that will deserve to be considered to be people with all the rights and protections that entails. And you're right (oh god, I just threw up in my mouth a little saying that) when you say it's important that we discuss the subject now so we can be ready if it happens. But currently robots are objects, property owned by their owners, and deserving of the protections that property has and not the rights that people have, no matter how attached we might be to them.

1) The idea that we should individuate systems by their owners is disgusting capitalist slop.
2) Participation is independent of "ownership". Historically, humans have been just fine owning human beings that were undoubtedly social participants.
3) The suggestion that facts about ownership can settle facts about agency is basically poo poo, and anyone who defends it is poo poo.
4) Your drinking bird example is very shallow. It responds identically to all stimuli, so there's no reason to take any particular response as meaningful. I'm not just talking about offering a specific behavioral response; I'm talking about supporting an extended, versatile exchange between active participants. Post-hoc rationalizations will carry an exchange only so far. hitchBOT was able to support that exchange clear across Canada. That doesn't make it sentient, but it absolutely makes it different from the drinking bird.
5) Part of my position is that internal states aren't what drives the social dynamics. It is participation between agents. Not just behaviors in the void, but mutually reinforcing dynamical patterns of behavior. The complex matrix on which community and culture thrive.
6) Again, this is what Turing was talking about with the imitation game: talk with a machine, see how well it carries the conversation. That's not just about what we think about the machine, but also about what the machine licenses us to think about it. Drinking bird doesn't license very much, but Boomer probably does license a funeral.

Who What Now
Sep 10, 2006

by Azathoth

Midig posted:

Not this poo poo again. Thread will be gassed if you don't talk poo poo about "counter arguments".

What are you whining about?

Who What Now fucked around with this message at 17:43 on May 17, 2017

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

Forer posted:

So I'm dumb and you might be covering this, but is the ethics of neural network simulations such as OpenWorm covered under the Descartes segment?

Descartes is basically irrelevant to any modern discussion of science, psychology, or the mind. But Descartes and Plato are basically the only philosophers anyone who isn't a professional philosopher has exposure to, so we try to interpret everything through the Matrix. Our current obsession with the simulation hypothesis is largely a product of the fact that no one knows philosophy past the 17th century. Sad!

OpenWorm (which is a standard example I use in class https://www.youtube.com/watch?v=YWQnzylhgHc) gets discussed in the context of Aristotle and Kant on teleology (goal-orientation). The question is: what does it mean to be an organism ("self-organized", "alive")? This issue is fundamentally about understanding the way our self and identity emerge from our biological constitution. This also gets into questions of what it means to be "programmed", the nature and function of "machines", and the relationship between bots and their code. This is probably where my philosophical footing is strongest, and where the relationship to traditional AI debates is most esoteric, so it will probably come later in the series, after I've better established a voice and flow. But this is absolutely on the menu.

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Who What Now posted:

I would argue that robots, being currently incapable of sentience, cannot be meaningfully discriminated against or oppressed. Their owners could be, but not the robots themselves any more than a hammer, a calculator, or any other tool can be. Thus, while they deserve to not be defaced or destroyed, they cannot be considered a protected class. You might be better served by saying they need certain protections inspired by or similar to those afforded by Protected Class status.

I also don't agree with you that robots currently "participate" in society, but rather humans participate at them. The robots contributions are primarily post-hoc rationalizations ascribing meaning to actions that are preprogrammed, random or semi-random, controlled by a human, or a combination of any/all of those things. In every case the robot has no real control over it's supposed contributions.

Currently under your definitions as I understand them a drinking bird toy would be deserving of protected class rights because I can talk to it and it clearly communicates back to me by nodding yes. I can even ask it point blank "Are you a sentient, sapient, fully cognizant person deserving of rights" and it'll say yes! But I think even you can see that would be absurd.


Basically, I think you're putting the cart before the horse here. It's possible, I even believe somewhat likely, that in the future there will be AIs that will deserve to be considered to be people with all the rights and protections that entails. And youre right (oh god, i just threw up in my mouth a little saying that) when you say it's important that we discuss the subject now so we can be ready if it happens. But currently robots are objects, property owned by their owners, and deserving of the protections that property has and not the rights that people have, no matter how attached we might be to them.

It's funny how the term "robot" is so loose here. My dishwasher could count as a robot. I have a robotic vacuum that only knows where it is based on infrared sensors and whether or not it sees a barrier. My coffee machine can be told to make me coffee at 6:45 AM so I am able to drink some before I head to work. All of these things count as robots and I wouldn't afford them any level of rights because they're incapable of doing anything beyond handling a set of instructions that cannot be questioned or changed.

We afford animals some aspect of rights because they're creatures that can feel even though they have no level of sapience. They're also necessary to the balance of our ecosystem and we recognise that even though we're doing a poo poo job taking care of everything around them.

Robots are human constructs that do not have the ability to feel things and are at this moment in time incapable of sapience--and it's debatable whether or not we're able to achieve this. They're not necessary to the balance of our ecosystem and they exist to take care of labour that we would otherwise have to engage in.

Eripsa needs to come back down to reality.

WampaLord
Jan 14, 2010

OSI bean dip posted:

Robots are human constructs that do not have the ability to feel things and are at this moment in time incapable of sapience

I don't even see how a sane person could disagree with this. Jesus Christ, Eripsa, can we worry about the actual problems that real people have instead of hypothetical robot rights?

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

WampaLord posted:

I don't even see how a sane person could disagree with this. Jesus Christ, Eripsa, can we worry about the actual problems that real people have instead of hypothetical robot rights?

You have to remember that when Eripsa was wanking about the Polytopolis, he pretty much said that he wanted to tell minorities and low-income persons to butt out. He doesn't give a poo poo about social problems, only about his make-believe ones.

Ytlaya
Nov 13, 2005

khwarezm posted:

I don't think there's a sharp divide between anti-SJW and older reactionary stuff. It may be less religious, but to be honest it really, really reminds me of the 'Political Correctness Gone Mad (PCGM)' stuff that you often hear from older people. I know my dad is a lot like that; he isn't at all religious (to an unusual extent given his age group and the fact that he's Irish) and he doesn't use terms like SJW, but a lot of the underlying ideology is almost exactly the same, such as distrust of Muslims because they refuse to integrate and their beliefs are intrinsically violent.

Another thing: I've spent most of my life in Britain and Ireland, and I think that conservatism in Britain is a lot less religiously charged than it is stateside. I think that the current anti-SJW reactionaries who have caught on in America and who, like you say, aren't very religious are very similar to British conservatives old and new, and it's no coincidence that a lot of popular figures like Thunderf00t and Sargon are British too.

Oh, the ideas are certainly the same as those of the "POLITICAL CORRECTNESS GONE WILD!!!" people of the past, but I'm more wondering about the motivations and what is causing this stuff to become more popular with young people specifically. With old people it's largely a "back in my day it was okay to make racist jokes!" thing, but that aspect doesn't really exist for the college-aged kids who are getting into this stuff now.

My post was mainly wondering if increased positive coverage of social justice related topics in the mainstream media has made it a target for the same sort of people who would have ranted about FOX news back during the Bush years.

Who What Now
Sep 10, 2006

by Azathoth

Eripsa posted:

1) The idea that we should individuate systems by their owners is disgusting capitalist slop.
2) Participation is independent of "ownership". Historically, humans have been just fine owning human beings that were undoubtedly social participants.
3) The suggestion that facts about ownership can settle facts about agency is basically poo poo, and anyone who defends it is poo poo.

Wow, did you ever miss the point of my post. And then you attacked me for no good reason. Is this what I get for giving you the benefit of the doubt? Have we descended into you lashing out at people trying to give real criticisms already? I hope not, because, unlike your word salads about building a new Facebook, you actually have some stuff worth saying about this subject.

First, I'd like to point out that at no point did I imply that something being owned has anything to do with whether or not it has agency, and I want you to quote the parts you believe did imply that so I can explain to you what I actually meant. As it stands, these look to be baseless attacks for no reason other than to denigrate my character, and I don't think you want to go down that route.

quote:

4) Your drinking bird example is very shallow. It responds identically to all stimuli, so there's no reason to take any particular response as meaningful. I'm not just talking about offering a specific behavioral response; I'm talking about supporting an extended, versatile exchange between active participants. Post-hoc rationalizations will carry an exchange only so far. hitchBOT was able to support that exchange clear across Canada. That doesn't make it sentient, but it absolutely makes it different from the drinking bird.

I know it was shallow, it was shallow by design to point out a potential flaw in your justifications. My interactions with the drinking bird (who I will refer to as Dirb) can cover a huge range of topics, from philosophy, to politics, to romance, and so much more, as long as I stick to yes-or-no questions. True, it makes things harder on my part, but I can still get quite a variety of responses out of him if I word my questions correctly. Why, just the other day I asked Dirb if there was a special someone in his life, and he said yes. I even asked if it was ~true love~ and he said that it was! He's also a great listener, always nodding along to show that he's being attentive to what I'm saying. Eripsa, how can you be so cruel as to deny the very real emotional bond I have with Dirb, a being capable of true love?

The point being, the difference between Dirb and Ken Eripsabot is exactly how much work is required to maintain the participation between us. Dirb (arguably) requires a lot more work on my part in order to phrase my interactions with him in a way that will allow him to respond in a manner that makes sense, while Ken can take a much wider variety of prompts, although he can still produce responses that don't make any sense and still requires Wren to control his responses to a degree. But the fact remains that they aren't actually responding in a way we can reasonably call "participating" because they aren't actually doing anything on their own. Gravity controls Dirb and an algorithm controls Ken. So if you're going to make allowances for one and not the other then you need to draw the line in the sand to show exactly how sophisticated a machine has to be before it goes from being an object to being a real participant. Currently my line is a lot further than yours is.

As an aside Dirb could kick Ken's rear end because Dirb can hit the Delete key and Ken couldn't do anything about it. Dirb: 1, Ken: 0.

quote:

5) Part of my position is that internal states aren't what drives the social dynamics. It is participation between agents. Not just behaviors in the void, but mutually reinforcing dynamical patterns of behavior. The complex matrix on which community and culture thrive.

And my position is that currently robots aren't actually agents capable of participation; we just like to pretend that they are. And, unlike human slaves, they can't be shown to be able to do so.

quote:

6) Again, this is what Turing was talking about with the imitation game: talk with a machine, see how well it carries the conversation. That's not just about what we think about the machine, but also about what the machine licenses us to think about it. Drinking bird doesn't license very much, but Boomer probably does license a funeral.

I already described how Dirb can carry out deep, meaningful, even soul-touching conversations - as long as I plan out my side of things carefully enough. And in my opinion, Dirb deserves a funeral too, because he is real, and strong, and he's my friend.

Lightning Lord
Feb 21, 2013

$200 a day, plus expenses

Ytlaya posted:

Oh, the ideas are certainly the same as those of the "POLITICAL CORRECTNESS GONE WILD!!!" people of the past, but I'm more wondering about the motivations and what is causing this stuff to become more popular with young people specifically. With old people it's largely a "back in my day it was okay to make racist jokes!" thing, but that aspect doesn't really exist for the college-aged kids who are getting into this stuff now.

My post was mainly wondering if increased positive coverage of social justice related topics in the mainstream media has made it a target for the same sort of people who would have ranted about FOX news back during the Bush years.

Yep, in some circles it's seen as anti-establishment, edgy and cool. They're "asking questions that the mainstream is afraid to answer". A lot of these kids would have had stickers of Bush as a monkey on their trapper keepers if they had been around in like 2003.

But at the same time it's still pretty uncool IRL to go around and be a dick to girls because feminism is evil and poo poo like that. It's this weird online/offline dichotomy.

Discendo Vox
Mar 21, 2013

We don't need to have that dialogue because it's obvious, trivial, and has already been had a thousand times.
Oh good, eripsa's seeking attention by jacking threads again.

WampaLord
Jan 14, 2010

OSI bean dip posted:

You have to remember that when Eripsa was wanking about the Polytopolis, he pretty much said that he wanted to tell minorities and low-income persons to butt out. He doesn't give a poo poo about social problems, only about his make-believe ones.

At least when he was wanking about that, it was about actual people, not programmable objects that he thinks deserve some sort of respect.

boner confessor
Apr 25, 2013

by R. Guyovich

OSI bean dip posted:

You have to remember that when Eripsa was wanking about the Polytopolis, he pretty much said that he wanted to tell minorities and low-income persons to butt out. He doesn't give a poo poo about social problems, only about his make-believe ones.

it's much easier to solve imaginary problems because you can never be proven wrong. this is the most important thing to avoid when you're setting yourself up as a flawless visionary instead of someone who does real work that matters in the real world

Who What Now
Sep 10, 2006

by Azathoth

WampaLord posted:

At least when he wanking about that it was about actual people, not programmable objects that he thinks deserve some sort of respect.

I mean, you shouldn't break other people's stuff. But you also don't need to treat an inanimate object as if it were a person with rights, either.

Ytlaya
Nov 13, 2005


At best, it certainly doesn't make sense to afford robots any more rights than we afford simple animals, like mice (and we certainly don't give mice or most other non-pet animals funerals under most circumstances). I can't think of any argument where it makes sense to afford robots (assuming current and near-future technology, at least) more rights than we afford random animals. If anything, the interaction between humans and many animals is more "real" than the interaction between humans and robots.

The problem with the "if robots can interact with humans, that means they're having a meaningful relationship" logic is that humans can interact with a wide variety of non-living objects and form emotional connections with them. Just like it's okay (if not a bit strange) for a person to have a funeral for their car, it's technically okay to have one for a robot, but it is absurd to think that it should be expected. If we give funerals for robots, we may as well give them for any animal that has interacted with humans as well.

Basically, current robots are more or less just more complex versions of an "if a person says X, make the program say Y" program. The code can be written in ways that allow them to learn from their interactions, but ultimately they're still following a set of rules dictated by code written by humans (even if those rules allow new rules to be dynamically added or changed). You could say the same for living organisms, but the level of complexity is dramatically different, and you have to draw the line somewhere (since most people would obviously agree that we should give other humans rights), and current robots are far closer to simple programs than they are to actual life.
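To make that "if a person says X, make the program say Y" point concrete, here's a minimal sketch in Python (purely illustrative; the rules and names are made up, not taken from any real bot) of the kind of canned-response table, plus a trivial "learning" hook, that these bots basically boil down to:

code:

# A toy rule-based responder: a lookup table of human-written replies,
# plus a hook that lets new human-written rules be added at runtime.
responses = {
    "hello": "Hi there!",
    "are you sentient?": "Yes.",  # saying it doesn't make it so
}

def reply(message: str) -> str:
    # Normalize the input and look it up; fall back to a stock answer.
    return responses.get(message.strip().lower(), "I don't understand.")

def learn(trigger: str, answer: str) -> None:
    # "Learning" here is just adding another rule, still authored by a human.
    responses[trigger.strip().lower()] = answer

print(reply("Hello"))                # -> Hi there!
learn("who is your friend?", "Dirb.")
print(reply("Who is your friend?"))  # -> Dirb.

Even with the "learn" hook, every reply traces back to a rule a human wrote; the table just gets bigger.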

Lampsacus
Oct 21, 2008

Do you just want robots to have rights because it'd be cool? Because I feel that's the drive behind all of this.

boner confessor
Apr 25, 2013

by R. Guyovich

Lampsacus posted:

Do you just want robots to have rights because it'd be cool? Because I feel that's the drive behind all of this.

it's very important to eripsa that he be hailed as a groundbreaking philosopher and towering intellect but all the questions that matter are already swarmed and picked over by more capable intellectuals so he has to set out for the frontier so to speak and seek his fortune in rocky conceptual ground like "is my dishwasher a slave?"

ZenMasterBullshit
Nov 2, 2011

Restaurant de Nouvelles "À Table" Proudly Presents:
A Climactic Encounter Ending on 1 Negate and a Dream

OSI bean dip posted:

You have to remember that when Eripsa was wanking about the Polytopolis, he pretty much said that he wanted to tell minorities and low-income persons to butt out. He doesn't give a poo poo about social problems, only about his make-believe ones.


I'm really glad that, after saying he wanted to improve, Eripsa decided to stick to his guns and focus entirely on the make-believe "what would we do if robots had anything even close to sapience" aspect of technological growth and the proliferation of automated systems throughout production, instead of any of the actual social and economic issues that spring from this and heavily affect real sapient and sentient beings (because they're poor and thus not as relatable to Eripsa as his fantasy robot future dreams).

No, seriously, snide comments aside, Eripsa: the fact that you can see all the issues rising up from the increasing number and complexity of machines, yet choose to ignore the real, pressing issues affecting living beings and instead focus on weird, vague futuristic flights of fancy, paints both you and your arguments in a really callous light, especially given that your previous few brainchildren had similar problems.

Your argument, in that video and in all your previous 'works of genius', can be boiled down to "The internet and robots and future tech will fix society... somehow, because all those things are good and I like them, so we need to base everything on them in these vague terms I've set up." I get that you like your tech, that's neat, but gently caress off with this garbage where you ignore actual active issues because you yourself haven't experienced the struggle. You just come across as every techbro libertarian stereotype.

Also you should probably make a thread for your robot rights movement cause it'll take over this thread otherwise.

ZenMasterBullshit fucked around with this message at 22:13 on May 17, 2017

Who What Now
Sep 10, 2006

by Azathoth

boner confessor posted:

"is my dishwasher a slave?"

*cracks bullwhip against a refrigerator*

"Your name is Whirlpool!"

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

ZenMasterBullshit posted:

Also you should probably make a thread for your robot rights movement cause it'll take over this thread otherwise.

Please. I want to discuss the merits of dealing with the hopes and dreams of my Samsung washer and dryer.

OwlFancier
Aug 22, 2013

If you think about it, my computer having a driver problem is basically drapetomania.

Basically.

Midig
Apr 6, 2016

Jordan Peterson smells a bit like MRA/TRP. Dragon, gold, virgins, alphas. Professor at work here:

https://m.youtube.com/watch?v=a7Hez7iYAHo

Midig fucked around with this message at 00:07 on May 18, 2017

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

OSI bean dip posted:

Robots are human constructs that do not have the ability to feel things and are at this moment in time incapable of sapience--and it's debatable whether or not we're able to achieve this. They're not necessary to the balance of our ecosystem and they exist to take care of labour that we would otherwise have to engage in.

Eripsa needs to come back down to reality.

A thing's impact is not a function of who created it, what its creators intended, or whether it is sentient or sapient. A thing's impact is entirely a product of what it does in the world. The introduction of the automobile had an enormous impact on the economic and technological infrastructure of the planet, and that has nothing to do with whether cars can feel.

The invention of cars also demanded an entire legal and political structure to manage it. Cars don't have "rights" like people, but they are legally recognized entities with elaborate policy designed around them. The US has only developed policy addressing autonomous machines in the last 2 or 3 years. These questions are starting to become pressing political issues, and the conceptual space is unsettled (and replete with ignorance and confusion). This is a domain ripe for theorycrafting.

I'm not saying we owe robots free speech or political asylum, or that we should make their destruction a hate crime. All I'm saying is we should recognize the status of (at least some) robots as social participants whose engagement is recognized and protected by the social order. I'm especially thinking of the bullying of service robots. I think some robots should have a protected status to allow their operation in public spaces, and to make it a special offense to attack them or disrupt their operation. The way it's a special offense to attack a bus driver or a mailman. Not because these people are better than others, but because they play a special role that deserves recognition.

Who What Now posted:

I know it was shallow, it was shallow by design to point out a potential flaw in your justifications... Gravity controls Dirb and an algorithm controls Ken. So if you're going to make allowances for one and not the other then you need to draw the line in the sand to show exactly how sophisticated a machine has to be before it goes from being an object to being a real participant. Currently my line is a lot further than yours is.

Yes, my line for "participation" is in the basement, and quite easy to jump over. On my view machines have qualified as unrecognized participants for quite some time. I justify this by appeal to Turing's test, which is also a basement-level test, and people don't like it very much. I'm hoping to bring it back =)

I don't think there's a line to draw; part of the point of my examples is that "minimal agency" is not an absolute threshold, but a function of how supportive the social system is. So an organism that thrives in one environment might fail in the next, and that doesn't really say anything about the organism's inherent autonomy. The same point applies to machines.

quote:

I already described how Dirb can carry out deep, meaningful, even soul-touching conversations - as long as I plan out my side of things carefully enough. And in my opinion, Dirb deserves a funeral too, because he is real, and strong, and he's my friend.

The fact that you have to plan your end out carefully makes the exchange qualitatively unlike an actual conversation. That's just not how conversations work.

quote:

And my position is that currently robots aren't actually agents capable of participation; we just like to pretend that they are.

So my argument in the video distinguishes two questions: 1) is the robot a social participant? and 2) if so, does that participation warrant protections? Someone can deny 2 without denying 1, but you seem to be denying 1 as well. That's a much stronger claim, and I think it's also the much weaker position. I've hinted at my alternative above, but it will ultimately have to come out through future videos. FWIW, I think this misunderstanding of the dynamics of agency is the biggest one people have, both about the mind and about AI. It's kind of my goal to overturn this dogma.

Ytlaya posted:

At best, it certainly doesn't make sense to afford robots any more rights than we afford simple animals, like mice (and we certainly don't give mice or most other non-pet animals funerals under most circumstances).

Honestly it would be nice if we afforded animals some rights in the first place. No robot today deserves rights comparable to animals today. But the reason robots might be a more pressing concern is that robots operate within exclusively human social spaces. We don't have to worry about cows becoming stock traders, but robotraders are already a thing requiring legislation. Not because the AI is incredibly sophisticated, but because the systems it is interacting with are incredibly important.


ZenMasterBullshit posted:

Your argument, in that video and in all your previous 'works of genius', can be boiled down to "The internet and robots and future tech will fix society... somehow, because all those things are good and I like them, so we need to base everything on them in these vague terms I've set up." I get that you like your tech, that's neat, but gently caress off with this garbage where you ignore actual active issues because you yourself haven't experienced the struggle. You just come across as every techbro libertarian stereotype.

Yeah the aside about victim blaming is textbook libertarian techbro :rolleyes:

Also:



By the way, since I learned Premiere, I compiled 2.5 minutes of every time Boston Dynamics abused a robot: https://www.youtube.com/watch?v=4PaTWufUqqU

Lain Iwakura
Aug 5, 2004

The body exists only to verify one's own existence.

Taco Defender

Eripsa posted:

A thing's impact is not a function of who created it, what its creators intended, or whether it is sentient or sapient. A thing's impact is entirely a product of what it does in the world. The introduction of the automobile had an enormous impact on the economic and technological infrastructure of the planet, and that has nothing to do with whether cars can feel.

The invention of cars also demanded an entire legal and political structure to manage it. Cars don't have "rights" like people, but they are legally recognized entities with elaborate policy designed around them. The US has only developed policy addressing autonomous machines in the last 2 or 3 years. These questions are starting to become pressing political issues, and the conceptual space is unsettled (and replete with ignorance and confusion). This is a domain ripe for theorycrafting.

I'm not saying we owe robots free speech or political asylum, or that we should make their destruction a hate crime. All I'm saying is we should recognize the status of (at least some) robots as social participants whose engagement is recognized and protected by the social order. I'm especially thinking of the bullying of service robots. I think some robots should have a protected status to allow their operation in public spaces, and to make it a special offense to attack them or disrupt their operation. The way it's a special offense to attack a bus driver or a mailman. Not because these people are better than others, but because they play a special role that deserves recognition.

You're failing to understand something here. Yes, we create rules around new inventions. When computer crime became more commonplace, there were very few laws that one could be prosecuted under in the event of unauthorised access. Early computer crime cases in Canada were handled as electricity theft until the Criminal Code could catch up. The Americans had to create the Computer Fraud and Abuse Act (CFAA) because of an inability to prosecute anyone.

In the case of computer crime, we're talking about humans taking tools and then abusing them. The same problem existed with automobiles, which is why laws were created around them.

Robots on the other hand are not social participants. Humans have programmed them to do a task and they're performing that task as directed. They're not calling the shots; the shots are given to them to call.

What you're talking about here is improving vandalism laws--and it's likely that publicly-roaming robots are already covered. It's illegal to go into a public park and set a trash can afire. Why not approach it that way instead of going down some bizarre academic route where we start to consider the value that robots have from a human-rights-like perspective?

WampaLord
Jan 14, 2010

Eripsa posted:

The invention of cars also demanded an entire legal and political structure to manage it. Cars don't have "rights" like people, but they are legally recognized entities with elaborate policy designed around them.

No, Eripsa, we created policy to deal with drivers, not cars. Getting a license is something the human driving the car does, not something we award the car.

Who What Now
Sep 10, 2006

by Azathoth
Eripsa, are you going to acknowledge your blatant misrepresentation of what I said, or are you going to ignore that it ever happened? Because I don't see much point in treating you with respect if you aren't going to do the same.

Edit:

How can you gently caress up a meme? It's literally the easiest thing to make. :psyboom:

Who What Now fucked around with this message at 01:14 on May 18, 2017

Low Desert Punk
Jul 4, 2012

i have absolutely no fucking money
robots are poo poo and i will destroy them

OwlFancier
Aug 22, 2013

We also already have an ideology that seeks to make machinery more important than the people operating it: it's called capitalism :v:

Why do you keep trying to come up with new ways of describing capitalism?

Low Desert Punk
Jul 4, 2012

i have absolutely no fucking money
if robots don't have rights, how can my sexbot waifu and I get married?

Who What Now
Sep 10, 2006

by Azathoth
What do you even want that wouldn't be covered by property protection laws?

OwlFancier
Aug 22, 2013

Replace the human with the machine, legalize the machine, criminalize the human, the machine can't strike, the machine doesn't need paying.

It'd make a great setting for a dystopian cyberpunk novel is all I'm saying.

Ytlaya
Nov 13, 2005

Also, I'm pretty sure that if you're going to treat the destruction of service robots as a unique crime (distinct from regular destruction of public property), the same standards would need to apply to every smartphone with Siri (or whatever Google's version is called) installed on it. They are both things humans can pseudo-communicate with that create the limited illusion of sentience.


Who What Now
Sep 10, 2006

by Azathoth

OwlFancier posted:

Replace the human with the machine, legalize the machine, criminalize the human, the machine can't strike, the machine doesn't need paying.

It'd make a great setting for a dystopian cyberpunk novel is all I'm saying.

You mean Shadowrun?
