  • Locked thread
Blockade
Oct 22, 2008

Fried Watermelon posted:

People don't seem to be concerned with having backdoors and other faults built into things such as the neural lace. Consider that the NSA and CIA both have backdoors and workarounds built into hardware and software in nearly all computers and operating systems. Will you give them direct access to your brain? Not to mention all the vulnerabilities that hackers will eventually find.

I'm the average American and they already have direct access to my brain through the TV and the facetoobs.


Who What Now
Sep 10, 2006

by Azathoth
If I get uploaded to The Machine what are the chances that I and a band of plucky adventurers must go out to fight and defeat Roko's Basilisk in order to protect our new digital paradise?

Because I call being the Cyber-Paladin. I don't care if it's considered underpowered, Jeff, I'm doing this for roleplaying purposes and not build optimization!

Broccoli Cat
Mar 8, 2013

"so, am I right in understanding that you're a bigot or aficionado of racist humor?




STAR CITIZEN is for WHITES ONLY!




:lesnick:

Fried Watermelon posted:

People don't seem to be concerned with having backdoors and other faults built into things such as the neural lace. Consider that the NSA and CIA both have backdoors and workarounds built into hardware and software in nearly all computers and operating systems. Will you give them direct access to your brain? Not to mention all the vulnerabilities that hackers will eventually find.


Consider that those agencies already have tons of information on you but they don't give a poo poo.

Now imagine nobody gives a poo poo globally or individually, because they're too smart to give a poo poo.

Mulva
Sep 13, 2011
It's about time for my once per decade ban for being a consistently terrible poster.
That's stupid, if you could make anyone into whatever you wanted nobody is useless or forgettable. They are all assets, and indeed all equally useful assets. That would be the point of the exercise from your perspective. Making people machine gods. Ok, well, turns out everyone is super patriotic American machine gods, working for the good of the status quo as we define it. Because we are the ones programming the machine gods.

Broccoli Cat
Mar 8, 2013


Mulva posted:

That's stupid, if you could make anyone into whatever you wanted nobody is useless or forgettable. They are all assets, and indeed all equally useful assets. That would be the point of the exercise from your perspective. Making people machine gods. Ok, well, turns out everyone is super patriotic American machine gods, working for the good of the status quo as we define it. Because we are the ones programming the machine gods.


no it turns everyone into this


Mulva
Sep 13, 2011

Broccoli Cat posted:

no it turns everyone into this




No, it turns them into that with truck nuts hanging off the side because I loving say so. Now photoshop truck nuts on that.

Flowers For Algeria
Dec 3, 2005

I humbly offer my services as forum inquisitor. There is absolutely no way I would abuse this power in any way.


Uploading myself to the cloud will give me a good reason to forever remain a sexless virgin

Fried Watermelon
Dec 29, 2008


Broccoli Cat posted:

Consider that those agencies already have tons of information on you but they don't give a poo poo.

Now imagine nobody gives a poo poo globally or individually, because they're too smart to give a poo poo.

Yeah, they have tons of info on me, but now they'd have read/write access directly to my brain.

Broccoli Cat
Mar 8, 2013


Flowers For Algeria posted:

Uploading myself to the cloud will give me a good reason to forever remain a sexless virgin


like you have any choice in that

WrenP-Complete
Jul 27, 2012

https://twitter.com/DiscoverMag/status/870371227554246657

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!
An actual philosopher is taking my poo poo seriously

2017 what a year

http://schwitzsplinters.blogspot.co.uk/2017/06/the-social-role-defense-of-robot-rights.html

quote:

Robot rights cheap yo.

Cheap: Eripsa's argument for robot rights doesn't require that robots have any conscious experiences, any feelings, any reinforcement learning, or (maybe) any cognitive processing at all. Most other defenses of the moral status of robots assume, implicitly or explicitly, that robots who are proper targets of moral concern will exist only in the future, once they have cognitive features similar to humans or at least similar to non-human vertebrate animals.

In contrast, Eripsa argues that robots already have rights -- actual robots that currently exist, even simple robots.

His core argument is this:

1. Some robots are already "social participants" deeply incorporated into our social order.

2. Such deeply incorporated social participants deserve social respect and substantial protections -- "rights" -- regardless of whether they are capable of interior mental states like joy and suffering.

Let's start with some comparison cases. Eripsa mentions corpses and teddy bears. We normally treat corpses with a certain type of respect, even though we think they themselves aren't capable of states like joy and suffering. And there's something that seems at least a little creepy about abusing a teddy bear, even though it can't feel pain.

You could explain these reactions without thinking that corpses and teddy bears have rights. Maybe it's the person who existed in the past, whose corpse is now here, who has rights not to be mishandled after death. Or maybe the corpse's relatives and friends have the rights. Maybe what's creepy about abusing a teddy bear is what it says about the abuser, or maybe abusing a teddy harms the child whose bear it is.

All that is plausible, but another way of thinking emphasizes the social roles that corpses and teddy bears play and the importance to our social fabric (arguably) of our treating them in certain ways and not in other ways. Other comparisons might be: flags, classrooms, websites, parks, and historic buildings. Destroying or abusing such things is not morally neutral. Arguably, mistreating flags, classrooms, websites, parks, or historic buildings is a harm to society -- a harm that does not reduce to the harm of one or a few specific property owners who bear the relevant rights.


Arguably, the destruction of hitchBOT was like that. HitchBOT was a cute ride-hitching robot, who made it across the length of Canada but who was destroyed by pranksters in Philadelphia when its creators sent it to repeat the feat in the U.S. Its destruction not only harmed its creators and owners, but also the social networks of hitchBOT enthusiasts who were following it and cheering it on.

It might seem overblown to say that a flag or a historic building has rights, even if it's true that flags and historic buildings in some sense deserve respect. If this is all there is to "robot rights", then we have a very thin notion of rights. Eripsa isn't entirely explicit about it, but I think he wants more than that.

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots have remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Eripsa gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

This is all in response to my robot rights video here: https://www.youtube.com/watch?v=TUMIxBnVsGc

rudatron
May 31, 2011

by Fluffdaddy
A truly damning indictment of modern philosophy

Who What Now
Sep 10, 2006

by Azathoth
That sure would go great in its own thread, Eripsa.

Broccoli Cat
Mar 8, 2013


Eripsa posted:

ponderous wall-of-text ape-grappling with ape ethics


only human consciousness has "rights"

and we must program our mechanical overlords to allow this imaginary thing to continue while we become machines ourselves.

Eripsa
Jan 13, 2002


Broccoli Cat posted:

only human consciousness has "rights"

and we must program our mechanical overlords to allow this imaginary thing to continue while we become machines ourselves.

Why should this imaginary thing continue? What intrinsic value or purpose does it have, on your view? You've kept dodging my attempts to get an answer.

Raine
Apr 30, 2013

ACCELERATIONIST SUPERDOOMER



Eripsa posted:

This post is stupid. A photonic computer can calculate 1+1+1+... extremely fast and it is completely boring. Intelligence is not about processing speed.

Intelligence is about goal acquisition.


Something is more intelligent when it is better able to achieve its goals. The reason it is hard to compare human intelligence to (say) cat intelligence is that humans and cats have different goals, and we're both very good at achieving them.


I'm more intelligent when I'm more effective at accomplishing my goals. To build artificial intelligence, we need to build systems that are incredibly capable agents. And capability doesn't depend on processing speed. A computer can be very fast, but not effective at anything. Being effective means hooking yourself into other domains of action and control. Like, I could have a child, and that child could literally be the smartest neural architecture on the planet, but if that child grows up in an environment of poverty and oppression it will amount to nothing. Speed doesn't matter. A superintelligent photonic AI left to rot in an interstellar void doesn't mean poo poo.
You've been making some compelling arguments in this thread, but I don't understand your reasoning here.

You specifically say something is more intelligent when it is better able to achieve its goals. As a thought experiment, pretend we have two people who are exact clones with the same life experiences. The first clone has neurons that fire slower than the second clone's neurons. Both of them can accomplish the same goals, but it takes longer for the first clone because of the slower processing speed. The second clone can better accomplish his goals than the first. Under your definition of intelligence, the one with the faster firing neurons is more intelligent. Which goes completely against your argument that processing speed has nothing to do with intelligence.

My idea is that processing speed and goal acquisition go hand-in-hand. You obviously need both to be intelligent, but you can't ignore one in favor of the other. If there was a being that could solve any problem given to it, but it took a million years to process, it would be effectively unintelligent. Processing speed is important.

Also, saying other people's posts are stupid is poor form. Yuli Ban had a point. If we had a photonic computer programmed the same as a human brain (this is purely hypothetical), that individual would be able to effectively think much, much faster.

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop
Maybe there's an invisible dimension of super fast creatures all around us with problems that only exist on tiny timescales, that say if there was a being that could solve any problem given to it, but it took a million microseconds to process, it would be effectively unintelligent??

stone cold
Feb 15, 2014

Who What Now posted:

That sure would go great in its own thread, Eripsa.

it's too late

broccoli cat and eripsa have become one

the power of the mind!

Raine
Apr 30, 2013




Dumb Lowtax posted:

Maybe there's an invisible dimension of super fast creatures all around us with problems that only exist on tiny timescales, that say if there was a being that could solve any problem given to it, but it took a million microseconds to process, it would be effectively unintelligent??
They would be relatively unintelligent in their own dimension if they were incapable of solving problems fast enough to be able to react to their environment.

The point I was trying to make though is that thinking faster would make us effectively more intelligent, as we would be able to process more information and by extension able to tackle more problems with that information. I should add that I personally don't see intelligence as a static variable, but something that grows both with the amount of information you have and the ability to use information to solve problems. This isn't even taking into account social aspects of intelligence but on a basic level I hope I'm making sense here.

Eripsa
Jan 13, 2002


Funion posted:

You've been making some compelling arguments in this thread, but I don't understand your reasoning here.

You specifically say something is more intelligent when it is better able to achieve its goals. As a thought experiment, pretend we have two people who are exact clones with the same life experiences. The first clone has neurons that fire slower than the second clone's neurons. Both of them can accomplish the same goals, but it takes longer for the first clone because of the slower processing speed. The second clone can better accomplish his goals than the first. Under your definition of intelligence, the one with the faster firing neurons is more intelligent. Which goes completely against your argument that processing speed has nothing to do with intelligence.

My idea is that processing speed and goal acquisition go hand-in-hand. You obviously need both to be intelligent, but you can't ignore one in favor of the other. If there was a being that could solve any problem given to it, but it took a million years to process, it would be effectively unintelligent. Processing speed is important.

Also, saying other people's posts are stupid is poor form. Yuli Ban had a point. If we had a photonic computer programmed the same as a human brain (this is purely hypothetical), that individual would be able to effectively think much, much faster.

Have you ever worked with computer hardware? I can have a blazing fast, top of the line chip, but if I stick it in a junk rig it will be bottlenecked by the slowest part it's connected to.

So let's say I have your two machines with identical processing power, but one runs twice as fast. I give them the job of movie reviews, and set them both in front of a reel-to-reel projector playing at 24 fps. The first machine is able to process each frame and integrate it in exactly enough time to be ready for the next frame; in other words, it processes 24 fps. The machine running twice as fast would, therefore, be capable of processing 48 fps. So we might think naively that it can watch twice as many movies, produce twice as many reviews, etc. But they are both watching the same 24 fps reel-to-reel. So the second, much faster computer gets an input frame, processes it, and then has to wait around the same amount of time before the next frame shows up.

This is a pretty straightforward example where you can swap out one system for another that is dramatically better and still see no improvement whatsoever in results.

Processing speed can help acquire some goals. Other goals require time. You can't rush a meal recipe; turning the oven up twice as high won't cook a meal twice as fast. Nature generally doesn't favor speediness, it favors success. Some animals can thrive at extremely slow timescales (think sloth, cyclical cicada, dehydrated tardigrades, etc). Mammals are more intelligent creatures, but have long gestation periods, and length of gestation doesn't correlate with intelligence. Moving fast is one strategy, but it's not the only one, and it's no guarantee of success. Nature doesn't optimize for speed, it optimizes for success, and there are many routes to it.
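The reel-to-reel argument can be sketched as a toy calculation (the function and numbers here are invented purely for illustration): throughput is capped by whichever side is slower, so a processor that already keeps up with the source gains nothing from running faster.

```python
# Toy model of the reel-to-reel example: frames arrive at a fixed
# 24 fps, so a processor that finishes each frame early simply idles
# until the next frame shows up.

def frames_reviewed(proc_fps: float, source_fps: float = 24.0,
                    seconds: float = 60.0) -> int:
    """Frames fully handled from a fixed-rate source in a given window.

    Output is the minimum of what the source delivers and what the
    processor could consume; extra processor speed becomes dead time.
    """
    delivered = source_fps * seconds
    consumable = proc_fps * seconds
    return int(min(delivered, consumable))

slow = frames_reviewed(proc_fps=24)  # keeps up exactly: 1440 frames
fast = frames_reviewed(proc_fps=48)  # twice as fast: still 1440 frames
```

Only when the processor is the slower side (say, 12 fps against a 24 fps source) does its speed show up in the result.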

Raine
Apr 30, 2013




Eripsa posted:

Have you ever worked with computer hardware? I can have a blazing fast, top of the line chip, but if I stick it in a junk rig it will be bottlenecked by the slowest part it's connected to.

So let's say I have your two machines with identical processing power, but one runs twice as fast. I give them the job of movie reviews, and set them both in front of a reel-to-reel projector playing at 24 fps. The first machine is able to process each frame and integrate it in exactly enough time to be ready for the next frame; in other words, it processes 24 fps. The machine running twice as fast would, therefore, be capable of processing 48 fps. So we might think naively that it can watch twice as many movies, produce twice as many reviews, etc. But they are both watching the same 24 fps reel-to-reel. So the second, much faster computer gets an input frame, processes it, and then has to wait around the same amount of time before the next frame shows up.

This is a pretty straightforward example where you can swap out one system for another that is dramatically better and still see no improvement whatsoever in results.

Processing speed can help acquire some goals. Other goals require time. You can't rush a meal recipe; turning the oven up twice as high won't cook a meal twice as fast. Nature generally doesn't favor speediness, it favors success. Some animals can thrive at extremely slow timescales (think sloth, cyclical cicada, dehydrated tardigrades, etc). Mammals are more intelligent creatures, but have long gestation periods, and length of gestation doesn't correlate with intelligence. Moving fast is one strategy, but it's not the only one, and it's no guarantee of success. Nature doesn't optimize for speed, it optimizes for success, and there are many routes to it.
But if you could think faster, you couldn't speed up things that are physically impossible to rush, like cooking, but you could certainly come to conclusions faster and work out more abstract problems at a quicker pace.

I think of it like this. If you could think twice as fast, it would probably seem like time had slowed down to half its normal speed. This would let you process information faster and come to conclusions quicker. In reference to your fps analogy, physically we would be on the same pace as anyone else, but we would have solved the problems in our heads twice as fast as anyone else. This lets us solve problems at a higher level than anyone who thinks at a normal speed.

Debating, scientific research, and writing are a few examples that come to mind where thinking twice as fast would help solve problems.

Anyways, since this thread is all about the future, I imagine if we can make our brains think twice as fast, we can make our limbs react twice as fast, which actually would let us do things physically faster. (Although not cooking, to be fair...)

Who What Now
Sep 10, 2006

by Azathoth

Eripsa posted:

Have you ever worked with computer hardware? I can have a blazing fast, top of the line chip, but if I stick it in a junk rig it will be bottlenecked by the slowest part it's connected to.

So let's say I have your two machines with identical processing power, but one runs twice as fast. I give them the job of movie reviews, and set them both in front of a reel-to-reel projector playing at 24 fps. The first machine is able to process each frame and integrate it in exactly enough time to be ready for the next frame; in other words, it processes 24 fps. The machine running twice as fast would, therefore, be capable of processing 48 fps. So we might think naively that it can watch twice as many movies, produce twice as many reviews, etc. But they are both watching the same 24 fps reel-to-reel. So the second, much faster computer gets an input frame, processes it, and then has to wait around the same amount of time before the next frame shows up.

This is a pretty straightforward example where you can swap out one system for another that is dramatically better and still see no improvement whatsoever in results.

Processing speed can help acquire some goals. Other goals require time. You can't rush a meal recipe; turning the oven up twice as high won't cook a meal twice as fast. Nature generally doesn't favor speediness, it favors success. Some animals can thrive at extremely slow timescales (think sloth, cyclical cicada, dehydrated tardigrades, etc). Mammals are more intelligent creatures, but have long gestation periods, and length of gestation doesn't correlate with intelligence. Moving fast is one strategy, but it's not the only one, and it's no guarantee of success. Nature doesn't optimize for speed, it optimizes for success, and there are many routes to it.

Except the second computer could very easily watch twice as many movies by watching two of them simultaneously and alternating its focus between the two of them for each of its own "frames". Again, processing speed makes one of them better.
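The interleaving counter can be sketched with the same kind of toy model (hypothetical numbers again): once the faster machine has a second 24 fps reel to alternate with, its extra speed converts directly into extra output.

```python
# Sketch of the interleaving counter-argument: a 48 fps processor
# alternates between two independent 24 fps reels, handling a frame
# from reel A, then a frame from reel B, leaving no dead time.

def total_frames(proc_fps: float, streams: int,
                 source_fps: float = 24.0, seconds: float = 60.0) -> int:
    """Total frames processed across several fixed-rate streams."""
    delivered = streams * source_fps * seconds
    consumable = proc_fps * seconds
    return int(min(delivered, consumable))

one_reel = total_frames(proc_fps=48, streams=1)   # capped by the source
two_reels = total_frames(proc_fps=48, streams=2)  # speed pays off: 2x output
```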

WrenP-Complete
Jul 27, 2012

I'm not accustomed to reading philosophy. Here, Schwitzgebel wrote:

quote:

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots have remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Eripsa gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

I'm not sure then what rights this other philosopher agrees that robots should have? What are the rights of flags and historic buildings?

Who What Now
Sep 10, 2006

by Azathoth

WrenP-Complete posted:

I'm not accustomed to reading philosophy. Here, Schwitzgebel wrote:


I'm not sure then what rights this other philosopher agrees that robots should have? What are the rights of flags and historic buildings?

This is a problem that I have with Eripsa's argument. He claims that the definition of rights doesn't matter, and yet philosophical discussions are very much discussions about definitions. He jumped the gun with his first video; he should have clearly laid out how he was defining rights and exactly which rights he believes robots should have before going into why they should have those rights. In the YouTube intellectual thread I pointed out that robots already have some protections, or "rights", under the law in the form of property rights of owners.

I was accused of being a slavery apologist for mentioning this.

So, hopefully, Eripsa will at some point go back and lay the foundations that his current arguments will rest on. Until then :shrug:

WrenP-Complete
Jul 27, 2012

Who What Now posted:

This is a problem that I have with Eripsa's argument. He claims that the definition of rights doesn't matter, and yet philosophical discussions are very much discussions about definitions. He jumped the gun with his first video; he should have clearly laid out how he was defining rights and exactly which rights he believes robots should have before going into why they should have those rights. In the YouTube intellectual thread I pointed out that robots already have some protections, or "rights", under the law in the form of property rights of owners.

I was accused of being a slavery apologist for mentioning this.

So, hopefully, Eripsa will at some point go back and lay the foundations that his current arguments will rest on. Until then :shrug:

My understanding of third generation human rights is that they are aspirational, broad, loosely defined and communal. In that sense we can ask for robots to have a right to self determination for example. But (as Cranston, other authors, and forum members have pointed out) that may mean little if there is no enforcement power.

Eripsa
Jan 13, 2002


Funion posted:

But if you could think faster, you couldn't speed up things that are physically impossible to rush, like cooking, but you could certainly come to conclusions faster and work out more abstract problems at a quicker pace.

I think of it like this. If you could think twice as fast, it would probably seem like time had slowed down to half its normal speed. This would let you process information faster and come to conclusions quicker. In reference to your fps analogy, physically we would be on the same pace as anyone else, but we would have solved the problems in our heads twice as fast as anyone else. This lets us solve problems at a higher level than anyone who thinks at a normal speed.

My argument again is that processing speed is not always the bottleneck to problem solving. Often it isn't. In my desktop, hard drive read-write speeds are a far more constraining bottleneck than CPU speed, and ratcheting up CPU speed twice as fast won't change that.
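The desktop example resembles Amdahl's law: speeding up one stage only shrinks that stage's share of the total time. A rough sketch with invented timings:

```python
# Amdahl's-law-style sketch of the bottleneck point: a serial task with
# a fixed I/O portion and a CPU portion that scales with CPU speed.
# All timings here are made up for illustration.

def task_seconds(io_s: float, cpu_s: float, cpu_speedup: float = 1.0) -> float:
    """Total time for a serial task; only the CPU part benefits from speedup."""
    return io_s + cpu_s / cpu_speedup

baseline = task_seconds(io_s=9.0, cpu_s=1.0)                  # 10.0 s total
doubled = task_seconds(io_s=9.0, cpu_s=1.0, cpu_speedup=2.0)  # 9.5 s total
# Doubling CPU speed shaves only 5% off this disk-bound task.
```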

Who What Now posted:

Except the second computer could very easily watch twice as many movies by watching two of them simultaneously and alternating its focus between the two of them for each of its own "frames". Again, processing speed makes one of them better.

See, this is a non-sequitur. Just because one machine processes faster doesn't mean there's now a second reel-to-reel for showing twice as many movies. Nothing about the world or its pace at delivering frames has changed. You're still only getting 24 fps. If you process any faster, you're just creating dead time for yourself.

Again, this is why evolution does not optimize for processing speed as a general rule: it's not a guarantee of success. If intelligence = biological success, then every animal would be brilliant. But things like plants still exist, because there are plenty of domains to exploit where intelligence isn't worth very much. Giving a flower the capacity to compute complex floating point operations will, generally speaking, not assist in its capacity to be a successful flower.

LaserShark
Oct 17, 2007

It's over, idiot. You're gonna die here and now, and the last words out of your mouth will have been 'poop train.'
Look, all I want to know is if I will be able to turn into a car. If I can't transform the deal's off.

WrenP-Complete
Jul 27, 2012

LaserShark posted:

Look, all I want to know is if I will be able to turn into a car. If I can't transform the deal's off.

This username/post combination is fantastic. :golfclap:

Eripsa
Jan 13, 2002


WrenP-Complete posted:

I'm not sure then what rights this other philosopher agrees that robots should have? What are the rights of flags and historic buildings?

There are special rules for how to treat these things, and special punishments for violations. For instance, a city might want to tear down an old dilapidated building to revitalize a downtown area. This is normally within their political power. But if that building is declared a historic site, such a decision might be blocked. Declaring a site historic (or a World Heritage Site, etc.) is a way of acknowledging the social importance of these spaces, and of keeping them safe and protected.

Who What Now posted:

This is a problem that I have with Eripsa's argument. He claims that the definition of rights doesn't matter, and yet philosophical discussions are very much discussions about definitions. He jumped the gun with his first video; he should have clearly laid out how he was defining rights and exactly which rights he believes robots should have before going into why they should have those rights. In the YouTube intellectual thread I pointed out that robots already have some protections, or "rights", under the law in the form of property rights of owners.

I didn't say the definition of rights doesn't matter. I said I'm using a very broad conception of rights that should be compatible with most theories. It is not my intention to offer a controversial theory of rights. Pick your favorite theory of rights. My argument is that we ought to consider robots within the scope of those rights, however defined.

If you have a definition of rights that specifically excludes robots or blocks my position, it is totally relevant and appropriate to bring it up. For instance, someone on facebook responded as follows:

quote:

One obvious problem with this approach is that it has a circularity problem. Suppose it establishes that mistreating robots is wrong just like mistreating teddy bears is wrong. It doesn't tell us what constitutes mistreatment of robots. It only tells us that if something strikes us as the mistreatment of a robot, in the same way that something strikes us as the mistreatment of a teddy bear, then it will be just as wrong as the mistreatment of the teddy bear. But once you put it like that, it's not a very strong thesis.

My response:

quote:

> It doesn't tell us what constitutes mistreatment of robots.

// I don't know that it's possible to talk about the treatment of robots as such, so I can bite this bullet.

My argument is not against any particular kind of mistreatment against robots. Instead, my argument is that some robots may have status as social participants and members of society deserving some social and institutional recognition in virtue of that participation.

Literally the day after I published this video, the Mayor of San Francisco threatened to issue a ban against delivery robots. My argument does not imply that the ban is good or bad, but simply that it is an issue about the rights of robots to operate in public spaces.

edit: for the record, I think the ban is bad #botALLY

https://www.wired.com/2017/05/san-francisco-wants-ban-delivery-robots-squash-someones-toes/

The upshot is that if your theory of rights depends on some intrinsic feature of the agent, you probably won't be happy with my view.

Who What Now
Sep 10, 2006

by Azathoth

Eripsa posted:

See, this is a non-sequitur. Just because one machine processes faster doesn't mean there's now a second reel-to-reel for showing twice as many movies. Nothing about the world or its pace at delivering frames has changed. You're still only getting 24fps. If you process any faster, you're just creating dead time for yourself.

You're arguing as if a second reel-to-reel never, ever pops up. But it does, and it happens shockingly often. If speed were not a significant factor then we would never have even evolved past plants, and yet for every sloth and cicada you can point to I can list literally, not figuratively but literally, thousands of examples of species where the quick triumph over the slow.

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

Who What Now posted:

You're arguing as if a second reel-to-reel never, ever pops up. But it does, and it happens shockingly often. If speed were not a significant factor then we would never have even evolved past plants, and yet for every sloth and cicada you can point to I can list literally, not figuratively but literally, thousands of examples of species where the quick triumph over the slow.

Sure, fine, but it's not the mere fact of increasing the processing speed of a computer that has produced this new device in the world. That new reel-to-reel was produced in a factory obeying the regular old real-world laws of economics and politics, some of which crawls at glacial pace. Your ultrafast GTX 1000080 doesn't *on its own* change that.

Of course, by the time we have GTX 1000080s, presumably lots of other things about the social, economic, and political world have also changed. This is my point, the whole social fabric comes along for the ride, or it doesn't. There's never a point where superhuman AI takes off like a rocket and leaves us behind. That's part of the lesson of the Chiang story I posted a few pages back. Reposting because it's loving awesome.

Eripsa posted:

Here's a fantastic short story for the thread. Enjoy.

https://www.nature.com/nature/journal/v405/n6786/full/405517a0.html
quote:

Catching crumbs from the table
Ted Chiang

No doubt many of our subscribers remember reading papers whose authors were the first individuals ever to obtain the results they described. But as metahumans began to dominate experimental research, they increasingly made their findings available only via DNT (digital neural transfer), leaving journals to publish second-hand accounts translated into human language.

Without DNT, humans could not fully grasp earlier developments nor effectively utilize the new tools needed to conduct research, while metahumans continued to improve DNT and rely on it even more. Journals for human audiences were reduced to vehicles of popularization, and poor ones at that, as even the most brilliant humans found themselves puzzled by translations of the latest findings.

No one denies the many benefits of metahuman science, but one of its costs to human researchers was the realization that they would probably never make an original contribution to science again. Some left the field altogether, but those who stayed shifted their attentions away from original research and toward hermeneutics: interpreting the scientific work of metahumans.

Textual hermeneutics became popular first, since there were already terabytes of metahuman publications whose translations, although cryptic, were presumably not entirely inaccurate. Deciphering these texts bears little resemblance to the task performed by traditional palaeographers, but progress continues: recent experiments have validated the Humphries decipherment of decade-old publications on histocompatibility genetics.

The availability of devices based on metahuman science gave rise to artefact hermeneutics. Scientists began attempting to 'reverse engineer' these artefacts, their goal being not to manufacture competing products, but simply to understand the physical principles underlying their operation. The most common technique is the crystallographic analysis of nanoware appliances, which frequently provides us with new insights into mechanosynthesis.

The newest and by far the most speculative mode of inquiry is remote sensing of metahuman research facilities. A recent target of investigation is the ExaCollider recently installed beneath the Gobi Desert, whose puzzling neutrino signature has been the subject of much controversy. (The portable neutrino detector is, of course, another metahuman artefact whose operating principles remain elusive.)

The question is, are these worthwhile undertakings for scientists? Some call them a waste of time, likening them to a Native American research effort into bronze smelting when steel tools of European manufacture are readily available. This comparison might be more apt if humans were in competition with metahumans, but in today's economy of abundance there is no evidence of such competition. In fact, it is important to recognize that — unlike most previous low-technology cultures confronted with a high-technology one — humans are in no danger of assimilation or extinction.

There is still no way to augment a human brain into a metahuman one; the Sugimoto gene therapy must be performed before the embryo begins neurogenesis in order for a brain to be compatible with DNT. This lack of an assimilation mechanism means that human parents of a metahuman child face a difficult choice: to allow their child DNT interaction with metahuman culture, and watch him or her grow incomprehensible to them; or else restrict access to DNT during the child's formative years, which to a metahuman is deprivation like that suffered by Kaspar Hauser. It is not surprising that the percentage of human parents choosing the Sugimoto gene therapy for their children has dropped almost to zero in recent years.

As a result, human culture is likely to survive well into the future, and the scientific tradition is a vital part of that culture. Hermeneutics is a legitimate method of scientific inquiry and increases the body of human knowledge just as original research did. Moreover, human researchers may discern applications overlooked by metahumans, whose advantages tend to make them unaware of our concerns.

For example, imagine if research offered hope of a different intelligence-enhancing therapy, one that would allow individuals to gradually 'upgrade' their minds to a level equivalent to that of a metahuman. Such a therapy would offer a bridge across what has become the greatest cultural divide in our species' history, yet it might not even occur to metahumans to explore it; that possibility alone justifies the continuation of human research.

We need not be intimidated by the accomplishments of metahuman science. We should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we.

Who What Now
Sep 10, 2006

by Azathoth

Eripsa posted:

Sure, fine, but it's not the mere fact of increasing the processing speed of a computer that has produced this new device in the world. That new reel-to-reel was produced in a factory obeying the regular old real-world laws of economics and politics, some of which crawls at glacial pace. Your ultrafast GTX 1000080 doesn't *on its own* change that.

So? Being faster than the world around you is still an advantage no matter what, because as the world slowly accelerates you're still ahead of the curve. Speed is almost always an advantage, barring a very select few outliers.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Eripsa posted:

My argument again is that processing speed is not always the bottleneck to problem solving. Often it isn't. In my desktop, hard drive read-write speeds are a far more constraining bottleneck than CPU speed, and ratcheting up CPU speed twice as fast won't change that.
What's an example of a property of intelligent systems that is always the bottleneck to problem solving?
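Eripsa's earlier disk-versus-CPU point can be put as a toy model (the numbers below are made up for illustration): a serial pipeline runs at the rate of its slowest stage, so doubling a non-bottleneck stage leaves overall throughput unchanged.

```python
# Toy pipeline model: end-to-end throughput is capped by the slowest stage,
# so speeding up a non-bottleneck stage barely matters.
def pipeline_throughput(stage_rates):
    """Sustained rate of a serial pipeline: the minimum over its stages."""
    return min(stage_rates)

disk_mb_s = 150    # hypothetical HDD read rate (MB/s)
cpu_mb_s = 1200    # hypothetical CPU processing rate (MB/s)

before = pipeline_throughput([disk_mb_s, cpu_mb_s])
after = pipeline_throughput([disk_mb_s, cpu_mb_s * 2])  # double the CPU speed

print(before, after)  # 150 150: still disk-bound, doubling the CPU changed nothing
```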

WAR CRIME GIGOLO
Oct 3, 2012

The Hague
tryna get me
for these glutes

Eripsa posted:

Sure, fine, but it's not the mere fact of increasing the processing speed of a computer that has produced this new device in the world. That new reel-to-reel was produced in a factory obeying the regular old real-world laws of economics and politics, some of which crawls at glacial pace. Your ultrafast GTX 1000080 doesn't *on its own* change that.

Of course, by the time we have GTX 1000080s, presumably lots of other things about the social, economic, and political world have also changed. This is my point, the whole social fabric comes along for the ride, or it doesn't. There's never a point where superhuman AI takes off like a rocket and leaves us behind. That's part of the lesson of the Chiang story I posted a few pages back. Reposting because it's loving awesome.

Okay what about it becoming mandatory to do the gene therapy required for dmt? Try riding a horse on the highway. Or any roadway that had to be horse compatible. There will be no place for our biomass. In 1000 years.

WrenP-Complete
Jul 27, 2012

LeoMarr posted:

Okay what about it becoming mandatory to do the gene therapy required for dmt? Try riding a horse on the highway. Or any roadway that had to be horse compatible. There will be no place for our biomass. In 1000 years.

DMT?

Edit: I asked a friend of mine who used to work with/at the UN on human rights issues about this.

WrenP-Complete posted:

My understanding of third generation human rights is that they are aspirational, broad, loosely defined and communal. In that sense we can ask for robots to have a right to self determination for example. But (as Cranston, other authors, and forum members have pointed out) that may mean little if there is no enforcement power.
Here's what she said.

My friend posted:

So this is an interesting point re: aspirational goals. I think 2nd gen also falls under that description because it's about realizing a right rather than protecting a right vested to every individual
I hadn't pondered how technology / AI fits into 3rd gen mostly because I would say it's defined predominantly by the fact that it's ecological and future focused as well as being communal, broad and non western
And so my mind doesn't automatically go from organic/eco --> tech/abstract/mechanical/computer
I do think, though, that 3rd gen leaves a lot of room for animal rights activists and environmentalists to argue that human nights might be extended to other sentient beings, or at least that the ethical, sustainable treatment of other sentient beings is inextricably related to protecting human rights

We are breaking down the sentience component now; she's saying it might depend on the kind of right being extended, or on what the context of our conversation here is. (my database migration at work is taking forever)

Edit2: Here's a page of Ife she suggests is relevant, with the note: "People tend to conflate needs versus rights, and they use that conflation to say that human rights theory simply includes too many frivolous things."

WrenP-Complete fucked around with this message at 18:08 on Jun 2, 2017

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

stone cold posted:

it's too late

broccoli cat and eripsa have become one

the power of the mind!

I've tried several times to engage broccoli cat in direct argument, and he remains evasive.

One of my intellectual heroes is Norbert Wiener, and below is a lovely meme I made for one of my favorite quotes of his. It represents the kind of anti-human nihilism I'm urging you to consider: if not to embrace it fully, then at least to let it temper the more eschatologically religious aspects of AI futurism.

In the quote below I've added the passage that precedes the one in the meme, since it bears directly on the arguments in this thread.

quote:

To me, logic and learning and all mental activity have always been incomprehensible as a complete and closed picture and have been understandable only as a process by which man puts himself en rapport with his environment. It is the battle for learning which is significant, and not the victory. Every victory that is absolute is followed at once by the Twilight of the gods, in which the very concept of victory is dissolved in the moment of its attainment.

We are swimming upstream against a great torrent of disorganization, which tends to reduce everything to the heat death of equilibrium and sameness described in the second law of thermodynamics. What Maxwell, Boltzmann and Gibbs meant by this heat death in physics has a counterpart in the ethic of Kierkegaard, who pointed out that we live in a chaotic moral universe.

In this, our main obligation is to establish arbitrary enclaves of order and system. These enclaves will not remain there indefinitely by any momentum of their own after we have once established them. Like the Red Queen, we cannot stay where we are without running as fast as we can.

We are not fighting for a definitive victory in the indefinite future. It is the greatest possible victory to be, to continue to be, and to have been. No defeat can deprive us of the success of having existed for some moment of time in a universe that seems indifferent to us.

This is no defeatism, it is rather a sense of tragedy in a world in which necessity is represented by an inevitable disappearance of differentiation. The declaration of our own nature and the attempt to build an enclave of organization in the face of nature's overwhelming tendency to disorder is an insolence against the gods and the iron necessity that they impose.

Here lies tragedy, but here lies glory too.

https://en.wikipedia.org/wiki/Norbert_Wiener

Eripsa fucked around with this message at 22:10 on Jun 2, 2017

Who What Now
Sep 10, 2006

by Azathoth
^^^^^^
Nothing in this post could be recognized as a meme. How do you mess up memes?

Eripsa posted:

I didn't say the definition of rights doesn't matter. I said I'm using a very broad conception of rights that should be compatible with most theories. It is not my intention to offer a controversial theory of rights. Pick your favorite theory of rights. My argument is that we ought to consider robots within the scope of those rights, however defined.

The most common view of rights that I'm familiar with are that rights are defined on a scale and are granted by society to people, and the most useful definition of personhood I'm aware of relies on the actor being sentient and cognitively capable. By this definition robots are not people and thus are not themselves granted any rights. And any protections they have are an extension of the rights of their owners*. I've yet to see a more useful definition of rights, and if you have one then I feel like you need to make a case for it rather than just assume it as a given that robots qualify under most definitions.


*Saying that robots in today's society are objects and property is in no way comparable to chattel slavery of other humans. Please have the barest minimum of respect for my argument by not trying to call me a robo-slaver again.

quote:

If you have a definition of rights that specifically excludes robots or blocks my position, it is totally relevant and appropriate to bring it up. For instance, someone on facebook responded as follows:


My response:


The upshot is that if your theory of rights depends on some intrinsic feature of the agent, you probably won't be happy with my view.

See, you already realize this is a major issue with the foundation of your argument. I don't understand why you're so hesitant to address it head on.

Who What Now fucked around with this message at 19:33 on Jun 2, 2017

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

twodot posted:

What's an example of a property of intelligent systems that is always the bottleneck to problem solving?

Embodiment is a pretty big one. The story is that Watson won Jeopardy not because it could answer fastest, but because it could trigger the buzzer faster. Competitive Jeopardy players know that part of the strategy is getting in on that buzzer on time.

The cooperation of the environment is a related one. E. coli is one of the most efficient organisms on the planet at self-replication. It spits out copies of itself at something like six times the absolute thermodynamic lower bound on heat production. England says:

quote:

More significantly, these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220 n_pep / 42 n_pep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability. In light of the fact that the bacterium is a complex sensor of its environment that can very effectively adapt itself to growth in a broad range of different environments, we should not be surprised that it is not perfectly optimized for any given one of them. Rather, it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible!
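For the arithmetic behind England's "less than six times" claim, with heat measured in units proportional to the number of peptide bonds formed (the n_pep in the quote):

```python
# England's figures: measured heat output vs. the absolute thermodynamic
# lower bound, both in units proportional to n_pep (peptide bonds formed).
measured_heat = 220
lower_bound = 42

ratio = measured_heat / lower_bound
print(round(ratio, 2))  # 5.24, i.e. "less than six times" the physical bound
```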

E. coli sure ain't cute like tardigrades, but they're as close to a gray goo machine as you're likely to find.



E. coli represent a clear hazard to human health; they make you sick, and they kill almost half a million people every year. And you know how to deal with this dramatically efficient and hostile replicator? You wash your hands before you eat and after using the bathroom.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Eripsa posted:

The cooperation of the environment is a related one. E. coli is one of the most efficient organisms on the planet at self-replication. It spits out copies of it self at something like six times the absolute thermodynamic lower bound on heat production. England says:
I have no clue what you mean by embodiment, but are you seriously saying you can't imagine a context where "cooperation of the environment" isn't a bottleneck to achieving a goal? When you say environment, do you just mean "the totality of physical reality including the actor"? To use your example: I'm a computer that has a goal of processing as much video as possible and can only process 24 fps, and I have a source of video that comes in at 48 fps, creating a backlog. How does the environment being more cooperative increase my throughput?
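twodot's scenario can be sketched directly: with frames arriving at 48 fps and processing fixed at 24 fps, the backlog grows linearly no matter how cooperative the rest of the environment is.

```python
# Frames arrive at 48 fps but are processed at 24 fps; the backlog grows
# by (48 - 24) = 24 frames every second, regardless of anything else.
arrival_fps = 48
process_fps = 24

backlog = 0
for second in range(10):          # simulate 10 seconds
    backlog += arrival_fps - process_fps

print(backlog)  # 240 frames behind after 10 seconds
```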

Adbot
ADBOT LOVES YOU

Eripsa
Jan 13, 2002

Proud future citizen of Pitcairn.

Pitcairn is the perfect place for me to set up my utopia!

Who What Now posted:

The most common view of rights that I'm familiar with are that rights are defined on a scale and are granted by society to people, and the most useful definition of personhood I'm aware of relies on the actor being sentient and cognitively capable. By this definition robots are not people and thus are not themselves granted any rights. And any protections they have are an extension of the rights of their owners*. I've yet to see a more useful definition of rights, and if you have one then I feel like you need to make a case for it rather than just assume it as a given that robots apply under most definitions.

To be fussy and technical, we should distinguish here between "persons" and "humans". A person ("full personhood") is, as you say, sentient and cognitively capable of making their own decisions as an autonomous social agent. I am not arguing in any way that robots are full persons (yet). If anything, my argument is that personhood doesn't matter for rights.

Rights are usually identified with humans, as in the UDHR. Human rights extend to infants, the severely cognitively disabled, and the comatose, and do not in any way depend on "personhood" in the technical sense. Here's the preamble, where you will find no reference whatsoever to sentience or cognitive capacity.

quote:

Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world,

Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people,

Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law,

Whereas it is essential to promote the development of friendly relations between nations,

Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom,

Whereas Member States have pledged themselves to achieve, in co-operation with the United Nations, the promotion of universal respect for and observance of human rights and fundamental freedoms,

Whereas a common understanding of these rights and freedoms is of the greatest importance for the full realization of this pledge,

Now, Therefore THE GENERAL ASSEMBLY proclaims THIS UNIVERSAL DECLARATION OF HUMAN RIGHTS as a common standard of achievement for all peoples and all nations, to the end that every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms and by progressive measures, national and international, to secure their universal and effective recognition and observance, both among the peoples of Member States themselves and among the peoples of territories under their jurisdiction.

So. We might revise your definition of rights as follows:

Who What Now 2.0 posted:

The most common view of rights that I'm familiar with are that rights are defined on a scale and are granted by society to humans.

This would be technically accurate, but most would find it unsatisfying for its failure to mention anything about animals or other sentient creatures. So the natural (and also very popular) move goes like this:

Who What Now 2.5 posted:

The most common view of rights that I'm familiar with are that rights are defined on a scale and are granted by society to sentient creatures.

The debate sits here until someone builds a robot that others consider sentient. And absent a solution to the hard problems of consciousness, any proposed solution can be dogmatically ignored.

So without talking about robots at all, I'm proposing we adjust the definition ever so slightly as follows:

Who What Now X: Reborn posted:

The most common view of rights that I'm familiar with are that rights are defined on a scale and are granted by society to its members.

In other words, rights are granted not in virtue of some internal state, but in virtue of membership within the relevant (moral, political, social) communities within which it makes sense to give that agent rights. This talk of membership is inclusive of animal rights concerns but also of larger ecological and environmental concerns outside the domains of creatures-with-nervous-systems, and it provides a natural framing for talking about the rights of some machines.

So look, you can feign some snowflake tears over my comparison of human and robot rights. But the word robot literally derives from a word for slave, and the history of labor, civil, and human rights has marched alongside automation for hundreds of years. Breaking a robot and killing a person are not even approximately equivalent. But these questions are both arising within a shared history and political context. So we should be approaching the question of the social role of robots in full light of our long history of discussion over these same questions concerning humans.

  • Locked thread