|
fishmech posted:There's nothing to learn here dude. You made a bullshit claim, tried to back it up with something that doesn't even approach backing it up. Perhaps you should learn to not lie? Perhaps you should learn to read and understand that hyperbole is different than lying. The fact is that, yes, there are tons of articles calling the robot creepy and unsettling, and you've done nothing but say that isn't true despite all the articles I linked. And then you made up some bullshit argument about how every robot will have articles like these (equating random youtubers to the articles I linked), despite the fact that there are robots in the consumer space today that don't have articles talking about how creepy they are (like Roombas, for example). So go gently caress off with your bullshit, you're not adding anything to the discussion, my goal was to talk about social acceptance of robots and design choices, your goal is to say "nuh uh", so gently caress right off.
|
# ? May 16, 2018 00:10 |
|
|
|
ElCondemn posted:Please look into Cynthia Breazeal's work, you're just making poo poo up. You haven't done any research and are blatantly just dismissing the work in the field, as if you just inherently know better. You really don't know what you're talking about. That talk is called "rise of the personal robots". Boston Dynamics dog robot is for A) research and B) military. They don't give a poo poo about what it makes a person feel, nor should they. You're conflating two totally different things. If this was a robot to help at a hospital or something, yes you would be very correct, it would need to be comforting, not alien.
|
# ? May 16, 2018 00:45 |
|
Taffer posted:That talk is called "rise of the personal robots". Boston Dynamics dog robot is for A) research and B) military. They don't give a poo poo about what it makes a person feel, nor should they. You're conflating two totally different things. Sorry, the talk isn't the research I was talking about. If you look at the research section of her website you'll see a lot of the work that she and her team have done relating to human/robot interactions. She also has lots of videos explaining commercial applications of the work she's doing in the service and personal assistant industries. As for the robot only being for research and the military, I'm not sure where you got that information. Their original BigDog design was funded by DARPA but the spotmini (the robot in question) goes into production next year for commercial use. That means it's not just for hauling around supplies for soldiers, it would have many functions, especially considering their robotic arm demo. For instance it could be a delivery robot or something, in which case social acceptance of these robots is critical. ElCondemn fucked around with this message at 01:38 on May 16, 2018 |
# ? May 16, 2018 01:28 |
|
Ahem... This is the ideal robotic dog. You may not like it but this is what peak performance looks like.
|
# ? May 16, 2018 02:42 |
|
ElCondemn posted:Sorry, the talk isn't the research I was talking about. If you look at the research section of her website you'll see a lot of the work that she and her team have done relating to human/robot interactions. She also has lots of videos explaining commercial applications of the work she's doing in the service and personal assistant industries. Things for the military often branch out. But even if it ends up doing warehouse work or other corporate busy-work, the same thing applies. It doesn't need to be "socially accepted" any more than a forklift does. Like, we're arguing past each other at this point. I think everyone agrees that something should be approachable if it's going to be used for personal/consumer use, but that's not what this is developed for. It's strictly utilitarian. But that said, I still think you're overblowing how "unsettling" it is. It is just a dog-like robot whose legs bend backwards. It's not that weird.
|
# ? May 16, 2018 03:21 |
|
Taffer posted:It doesn't need to be "socially accepted" any more than a forklift does.
|
# ? May 16, 2018 14:01 |
|
ElCondemn posted:Perhaps you should learn to read and understand that hyperbole is different than lying. The fact is that, yes, there are tons of articles calling the robot creepy and unsettling, and you've done nothing but say that isn't true despite all the articles I linked. And then you made up some bullshit argument about how every robot will have articles like these (equating random youtubers to the articles I linked), despite the fact that there are robots in the consumer space today that don't have articles talking about how creepy they are (like Roombas, for example). So go gently caress off with your bullshit, you're not adding anything to the discussion, my goal was to talk about social acceptance of robots and design choices, your goal is to say "nuh uh", so gently caress right off. But you are lying. You are making things up and then trying to attach someone else's authority behind that when they're not making a claim remotely similar to what you were trying to defend. Here's an idea: Just stop claiming "everyone" when what you mean is "tiny minorities". And stop making false claims to authority. Your personal hang-ups about how the robot sent to kill you is weird because the legs are 0.35% off are not universal. Far more people are concerned about the fact that it's a robot designed for killing. BrandorKP posted:Ahem... Taffer posted:Things for the military often branch out. But even if it ends up doing warehouse work or other corporate busy-work, the same thing applies. It doesn't need to be "socially accepted" any more than a forklift does. It's especially funny because real dogs have legs that bend "backwards". Or is it humans whose legs bend backwards compared to dogs? Makes you think. No matter which way the leg bends, it's a way you'll see among many animal species.
|
# ? May 16, 2018 16:12 |
|
Can I have legal sex with self driving cars? Self directing prostitution for example.
|
# ? May 21, 2018 14:09 |
|
Sex acts with AIs are probably going to be classed as being kind of like bestiality, I'd figure. Well, OK, I desperately hope.
|
# ? May 21, 2018 14:39 |
|
What? Why would you hope for that? edit: is this you IRL?
|
# ? May 21, 2018 15:29 |
|
Because I'm dubious about an AI's capacity to consent.
|
# ? May 21, 2018 17:03 |
|
Oh, you mean like AI AI, like if it's sentient or close to that. Yeah there's gonna be all kinds of ethical issues once AI gets that advanced.
|
# ? May 21, 2018 17:13 |
|
Cicero posted:Oh, you mean like AI AI, like if it's sentient or close to that. Yeah there's gonna be all kinds of ethical issues once AI gets that advanced. I mean, there are going to be a lot of ethical issues, but they are going to be more about how a human can deal with immortal beings with inhuman minds that can move body to body, clone and remerge their minds, have perfect memories, can rewrite or investigate any part of their own brain, and do not necessarily even have barriers between individuals or exist in any actual physical location.
|
# ? May 21, 2018 17:27 |
|
Spacewolf posted:Because I'm dubious about an AI's capacity to consent. Well, if we reach that kind of AI I’d be more concerned about Fuckotron 5001 not giving a poo poo about its owner’s consent before carrying out its primary function. Better realdolls with improved moaning and mobility are a more likely outcome, though.
|
# ? May 21, 2018 19:12 |
|
Spacewolf posted:Because I'm dubious about an AI's capacity to consent. We're not far from really sophisticated AI that can mimic a lot of human behavior and understanding. But we are very far from an AI that's actually self-aware. Like it's not even on the horizon, it's so complex that no one even knows what the prerequisites to it are. Our understanding of the human mind and our technological capabilities are still super far from that.
|
# ? May 21, 2018 19:57 |
|
It seems like there are 4real philosophical questions about how the heck crimes against nonhuman intelligence would even work, especially if people were making AIs to order, but talking about it in terms of sex seems like there is no way that conversation wouldn't get gross and creepy super fast.
|
# ? May 21, 2018 21:15 |
|
Taffer posted:We're not far from really sophisticated AI that can mimic a lot of human behavior and understanding. But we are very far from an AI that's actually self-aware. Like it's not even on the horizon, it's so complex that no one even knows what the prerequisites to it are. Our understanding of the human mind and our technological capabilities are still super far from that. It would be better to describe it as, we have no idea how close we are to having a "really self-aware" AI. And because we don't have an idea of how close we are, it's almost certainly a) much farther than it might seem and b) likely to have little connection with various little things on the way.
|
# ? May 21, 2018 21:35 |
|
I am pretty sure we are never going to get a satisfactory answer on what self-awareness/consciousness/qualia even are or do and we are going to just have to exist in a world of more and more advanced machines that can complete more and more humanlike tasks without ever having any sort of magic moment when some vital force is definitively injected into a computer and it just wakes up as a dude.
|
# ? May 21, 2018 22:35 |
|
Owlofcreamcheese posted:I am pretty sure we are never going to get a satisfactory answer on what self-awareness/consciousness/qualia even are or do and we are going to just have to exist in a world of more and more advanced machines that can complete more and more humanlike tasks without ever having any sort of magic moment when some vital force is definitively injected into a computer and it just wakes up as a dude. I like the idea that consciousness is a spectrum, as the complexity increases so does consciousness. We all believe we're conscious but for all we know we're just doing what animals or computers do but at a higher level of complexity.
|
# ? May 21, 2018 22:50 |
|
ElCondemn posted:I like the idea that consciousness is a spectrum, as the complexity increases so does consciousness. We all believe we're conscious but for all we know we're just doing what animals or computers do but at a higher level of complexity. I feel like all the Chinese room/p-zombie/blockhead type theory of mind arguments do a good job of showing that you can't possibly ever determine whether anything, including yourself, is or isn't conscious, and we probably just need to declare that anything that acts like it is conscious might as well be, or that it doesn't matter, or that nothing is, or some other "who knows" type answer where it's irrelevant to anything.
|
# ? May 21, 2018 23:16 |
|
Not surprised that OOCC considers sex to be gross.
|
# ? May 21, 2018 23:31 |
|
Freakazoid_ posted:Not surprised that OOCC considers sex to be gross. Rape is gross, and conversations where people recreationally try to figure out made up fantasy scenarios to figure out if something is or isn't rape are always gross. And the conversation about the rights on a thing that isn't a human and can be designed any way you want is a good conversation but absolutely shouldn't be had in terms of when it is or isn't raping it.
|
# ? May 21, 2018 23:43 |
|
fishmech posted:It's especially funny because real dogs have legs that bend "backwards". Or is it humans whose legs bend backwards compared to dogs? Makes you think. I'm chiming in to correct the record on dog legs, as I do from time to time on these gay forums: dogs are digitigrades, their leg formation is similar to a human's if we were to elevate our heels and walk on the fronts of our feet; they do not bend backwards. Please continue discussing the coming synthetic dawn.
|
# ? May 21, 2018 23:55 |
|
fishmech posted:It would be better to describe it as, we have no idea how close we are to having a "really self-aware" AI. And because we don't have an idea of how close we are, it's almost certainly a) much farther than it might seem and b) likely to have little connection with various little things on the way. I don't recall where, but iirc there was a series of publications predicting heavier-than-air flight was decades away months before the Wright brothers flew their prototype. Deep learning processes as they are now have very little direct or manageable human understanding; it's more a management of inputs and outputs on a neural matrix that operates very independently and gets broadly tweaked. We consider consciousness to be an emergent quality of the brain, and describing something as 'emergent' is often a way of saying 'we really don't understand this'. We are still operating a bit blindly on even defining AGI, but I wouldn't preclude the chance that it's possible near term or long term given the near-global effort to develop it. As it stands, a functioning AGI could have the potential to be of similar strategic importance as the atomic bomb in 1945; chances are there are factions in the world pushing forward on getting that going asap. Regardless, I just own a shitload of Amazon, BABA and Google shares hoping it emerges there and then profit
|
# ? May 22, 2018 00:12 |
|
Grammar-Bolshevik posted:I don't recall where, but iirc there was a series of publications predicting heavier-than-air flight was decades away months before the Wright brothers flew their prototype. "Deep learning" is likely to be one of these guys, who was also around just before the Wright Brothers flew: Also uh, by the time the Wright Brothers were approaching success, people knew we could make heavier-than-air craft that would maintain lift for extended times, knew ways to control them, and just hadn't cracked the proper power-to-weight ratio. It's a terrible comparison to building consciousness machines.
|
# ? May 22, 2018 00:56 |
|
fishmech posted:"Deep learning" is likely to be one of these guys, who was also around just before the Wright Brothers flew: someone somewhere posted:
I'm not going to tell you when something is going to be invented, but you have a nice list of people smarter than you saying poo poo that didn't pan out. The least you could do is just leave it as an unknown, because that is what a good person should do, rather than be a salty weeb about it.
|
# ? May 22, 2018 04:39 |
|
Grammar-Bolshevik posted:I'm not going to tell you when something is going to be invented, but you have a nice list of people smarter than you saying poo poo that didn't pan out. Cool list of quotes, but they prove absolutely nothing. Additionally, many were true, or were only meant for a limited horizon or the immediate situation during which they were true. Consider that the Bill Gates quote is outright false - he never said it, and the best available sources on whether it was said at all indicate that it was someone referring to the year it was being released, 1981. Incidentally, in 1981, most personal computers being sold had well less than the 640 KB of potential contiguous memory the IBM PC would have. Consider: it was still impressive in 1982 that the Commodore 64 released with a whole 64 KB of RAM installed (~39k available to the user by default), and the then-current Apple II Plus models did not ship from Apple with more than 64 KB (and it was quite difficult to meaningfully use more than 64 KB on the architecture anyway, though later Apple II models could be expanded to several megabytes). The high-end IBM PC models with 256 KB up to 512 KB installed in them were very advanced machines for the time, and indeed little was developed that had a problem with the 640 KB limit, largely due to few people and businesses having the full megabyte plus of RAM to have 640 KB contiguous conventional memory accessible. Edit: And you should also consider that Edison was well aware alternating current was viable, but he had the patents on vital direct current poo poo so of course he's going to bag on the competition. So that one might as well be a quote of Pepsi saying Coke doesn't taste good or whatever. The DEC guy was not talking about personal computers, but rather the archetypal "smart home" computer that would control everything from the HVAC to cooking dinner etc.
That poo poo still doesn't work right without very careful setup and attention despite another 40 years behind it. fishmech fucked around with this message at 05:07 on May 22, 2018 |
# ? May 22, 2018 04:58 |
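As an aside, the 640 KB figure in the post above falls straight out of the IBM PC's memory map: the 8086's 20-bit address bus gives 1 MB of address space, and IBM reserved everything from address 0xA0000 upward for video memory and ROMs, leaving contiguous "conventional" memory below it. A quick sketch of the arithmetic (plain Python; nothing assumed beyond those published addresses):

```python
# The 8086 has a 20-bit address bus: 2^20 bytes of addressable memory.
total_address_space = 2 ** 20          # 1,048,576 bytes = 1 MB

# IBM reserved the region from 0xA0000 upward for video memory and ROMs,
# so contiguous conventional memory for programs ends at 0xA0000.
conventional_limit = 0xA0000           # 655,360 bytes

print(conventional_limit // 1024)      # 640 (KB of conventional memory)
print(total_address_space // 1024)     # 1024 (KB of total address space)
```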
|
They thought the same thing about the differential equations paradigm in systems. Ideological hubris is dangerous. Failure to recognize a new way of solving problems and thinking about solutions is also dangerous.
|
# ? May 22, 2018 05:12 |
|
quote:The experts that Grace and co coopted were academics and industry experts who gave papers at the International Conference on Machine Learning in July 2015 and the Neural Information Processing Systems conference in December 2015. These are two of the most important events for experts in artificial intelligence, so it’s a good bet that many of the world’s experts were on this list. https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/
|
# ? May 22, 2018 13:06 |
|
Spacewolf posted:Because I'm dubious about an AI's capacity to consent. Can an AI consent to being created in the first place? What should the lifespan be of the first AI?
|
# ? May 22, 2018 19:20 |
|
Mozi posted:Can an AI consent to being created in the first place?
|
# ? May 22, 2018 19:28 |
|
BrandorKP posted:They thought the same thing about the differential equations paradigm in systems. Ideological hubris is dangerous. Failure to recognize a new way of solving problems and thinking about solutions is also dangerous. Any good sources to read on what the heck the "differential equations paradigm" is?
|
# ? May 22, 2018 19:28 |
|
SaTaMaS posted:Any good sources to read on what the heck the "differential equations paradigm" is? Basically, a couple of things all converged. Systems thinking evolves out of designing and modeling steam power plants, and it starts with heavy first-law/second-law thermo. Rocketry needs control theory, which is solved with differential equations. The control theory and systems theory get combined. You can use the differential equations to solve the control problems in the modeled steam system and make real analog automation for the steam plants with pneumatic and spring-controlled valves. You then have a way of describing the systems (which have stocks and flows) and the feedback and control elements inside the systems (the controls part). Oh poo poo, this type of modeling can be applied all over the loving place in society and business. It takes off in places like MIT. Anything with stocks and flows, feedback loops, and time delays can now be described. Rand Corp ends up doing an assload of it. Hubris in thinking we can know, caused by the advances in System Dynamics, contributes to cluster fucks like the Vietnam war. "Contributes" in a "many of the same people", direct-relationship sort of way. Look for "System Dynamics"
|
# ? May 22, 2018 19:51 |
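The stocks-and-flows modeling described above is easy to sketch in code. This is a generic toy illustration, not any specific MIT or Rand model; the stock, target, and gain names are all invented for the example. One stock with one inflow in a negative feedback loop, integrated with Euler steps:

```python
def simulate_stock(target=100.0, gain=0.2, dt=1.0, steps=50):
    """Minimal stock-and-flow model: one stock, one feedback-controlled flow.

    The inflow is proportional to the gap between the target and the current
    stock level - a negative feedback loop, the basic building block of
    System Dynamics models.
    """
    stock = 0.0
    history = []
    for _ in range(steps):
        inflow = gain * (target - stock)  # feedback: flow shrinks as stock nears target
        stock += inflow * dt              # Euler integration of dS/dt = inflow
        history.append(stock)
    return history

levels = simulate_stock()
# For 0 < gain * dt < 1, the stock rises monotonically and settles
# toward the target without overshooting.
```

Adding a time delay or a second coupled stock to a loop like this is what produces the oscillations and counterintuitive behavior these models became famous for.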
|
Preliminary report for the Uber crash is out; here's an excerpt from the summary:quote:According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).[2] According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
|
# ? May 24, 2018 15:23 |
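For a sense of scale on the report's numbers: here is roughly how far the vehicle was from the point of impact at each event, assuming constant speed (the excerpt gives no deceleration profile, so this is strictly a back-of-the-envelope sketch):

```python
MPH_TO_MS = 0.44704                   # 1 mph in meters per second (exact)

speed_ms = 43 * MPH_TO_MS             # ~19.2 m/s at the reported 43 mph

# Distance covered (at constant speed) between each event and impact:
detection_distance = speed_ms * 6.0   # first radar/LIDAR return: ~115 m out
braking_distance   = speed_ms * 1.3   # "braking needed" decision: ~25 m out

print(round(detection_distance, 1))   # 115.3
print(round(braking_distance, 1))     # 25.0
```

In other words, the system had well over a hundred meters of warning, but by the time it concluded braking was required, only about 25 meters remained, and by design it neither braked nor alerted the operator.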
|
NTSB report posted:According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator. So the computer won't do it and won't tell the operator to do it... Hm! Interesting strategy, Uber! So how is the operator supposed to know the computer doesn't have it under control? ...probably telepathy.
|
# ? May 24, 2018 16:26 |
|
Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally.
|
# ? May 24, 2018 16:32 |
|
Spacewolf posted:So the computer won't do it and won't tell the operator to do it... Hm! Interesting strategy, Uber! So how is the operator supposed to know the computer doesn't have it under control? It seems unclear whether that is talking about the Uber self-driving system or the Volvo City Safety braking system. The Volvo City Safety emergency braking is presumably off because having both active at the same time would be legitimately crazy.
|
# ? May 24, 2018 16:35 |
|
Paradoxish posted:Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally. The operator is supposed to be checking the self driving report console thing in the middle of the dash. That requires them to take their eyes off the road.
|
# ? May 24, 2018 16:53 |
|
Owlofcreamcheese posted:It seems unclear whether that is talking about the Uber self-driving system or the Volvo City Safety braking system. It isn't unclear at all from the quote. Read it again: quote:According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator. They even give their rationale for emergency braking maneuvers being disabled: to reduce erratic behavior. Or do you think the article is grossly misrepresenting what Uber said? Cuz from the entire article there is no way to come to the conclusion that they might have meant that they disabled Volvo's equipment. Over and over again it says that the operator is the one who is supposed to do the braking. *Edit: And why consistently give the benefit of the doubt to a lovely company that doesn't give a poo poo about safety in any respect of its business? Uber killed someone because of this and there's no reason to believe that there aren't many different areas where Uber failed to do its due diligence to ensure its system was operating safely. Raldikuk fucked around with this message at 17:05 on May 24, 2018 |
# ? May 24, 2018 17:02 |
|
|
|
Paradoxish posted:Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally. Nobody wants this though. The whole point of a self driving car is that you can sleep, gently caress around on your phone or masturbate instead of driving. If you’re still required to constantly pay attention and need to have your hands and feet on the wheel and brakes at all times the convenience gain over a regular car is next to zero.
|
# ? May 24, 2018 17:10 |