ElCondemn
Aug 7, 2005


fishmech posted:

There's nothing to learn here dude. You made a bullshit claim, tried to back up with something that doesn't even approach backing it up. Perhaps you should learn to not lie?

Perhaps you should learn to read and understand that hyperbole is different than lying. The fact is that yes there are tons of articles calling the robot creepy and unsettling, you've done nothing but say that isn't true despite all the articles I linked. And then you made up some bullshit argument about how every robot will have articles like these (equating random youtubers to the articles I linked), despite the fact that there are robots in the consumer space today that don't have articles talking about how creepy they are (like roombas for example). So go gently caress off with your bullshit, you're not adding anything to the discussion, my goal was to talk about social acceptance of robots and design choices, your goal is to say "nuh uh", so gently caress right off.


Taffer
Oct 15, 2010


ElCondemn posted:

Please look into Cynthia Breazeal's work, you're just making poo poo up. You haven't done any research and are blatantly just dismissing the work in the field, as if you just inherently know better. You really don't know what you're talking about.

edit: here's her website in case you have trouble finding the relevant information http://cynthiabreazeal.media.mit.edu
she's also not the only one in the field doing this research, there are lots of resources if you are actually interested in learning about this stuff.

That talk is called "rise of the personal robots". Boston Dynamics dog robot is for A) research and B) military. They don't give a poo poo about what it makes a person feel, nor should they. You're conflating two totally different things.

If this was a robot to help at a hospital or something, yes you would be very correct, it would need to be comforting, not alien.

ElCondemn
Aug 7, 2005


Taffer posted:

That talk is called "rise of the personal robots". Boston Dynamics dog robot is for A) research and B) military. They don't give a poo poo about what it makes a person feel, nor should they. You're conflating two totally different things.

If this was a robot to help at a hospital or something, yes you would be very correct, it would need to be comforting, not alien.

Sorry, the talk isn't the research I was talking about. If you look at the research section of her website you'll see a lot of the work that she and her team have done relating to human/robot interactions. She also has lots of videos explaining commercial applications of the work she's doing in the service and personal assistant industries.

As for the robot only being for research and the military, I'm not sure where you got that information. Their original BigDog design was funded by DARPA but the SpotMini (the robot in question) goes into production next year for commercial use. That means it's not just for hauling around supplies for soldiers, it would have many functions, especially considering their robotic arm demo. For instance it could be a delivery robot or something, in which case social acceptance of these robots is critical.

ElCondemn fucked around with this message at 01:38 on May 16, 2018

Bar Ran Dun
Jan 22, 2006
Ahem...

This is the ideal robotic dog. You may not like it but this is what peak performance looks like.

Taffer
Oct 15, 2010


ElCondemn posted:

Sorry, the talk isn't the research I was talking about. If you look at the research section of her website you'll see a lot of the work that she and her team have done relating to human/robot interactions. She also has lots of videos explaining commercial applications of the work she's doing in the service and personal assistant industries.

As for the robot only being for research and the military, I'm not sure where you got that information. Their original BigDog design was funded by DARPA but the SpotMini (the robot in question) goes into production next year for commercial use. That means it's not just for hauling around supplies for soldiers, it would have many functions, especially considering their robotic arm demo. For instance it could be a delivery robot or something, in which case social acceptance of these robots is critical.

Things for the military often branch out. But even if it ends up doing warehouse work or other corporate busy-work, the same thing applies. It doesn't need to be "socially accepted" any more than a forklift does.

Like, we're arguing past each other at this point. I think everyone agrees that something should be approachable if it's going to be used for personal/consumer use, but that's not what this is developed for. It's strictly utilitarian. That said, I still think you're overblowing how "unsettling" it is. It is just a dog-like robot whose legs bend backwards. It's not that weird.

Rent-A-Cop
Oct 15, 2004

I posted my food for USPOL Thanksgiving!

Taffer posted:

It doesn't need to be "socially accepted" any more than a forklift does.
Anyone who thinks a forklift is socially acceptable hasn't seen Staplerfahrer Klaus.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

ElCondemn posted:

Perhaps you should learn to read and understand that hyperbole is different than lying. The fact is that yes there are tons of articles calling the robot creepy and unsettling, you've done nothing but say that isn't true despite all the articles I linked. And then you made up some bullshit argument about how every robot will have articles like these (equating random youtubers to the articles I linked), despite the fact that there are robots in the consumer space today that don't have articles talking about how creepy they are (like roombas for example). So go gently caress off with your bullshit, you're not adding anything to the discussion, my goal was to talk about social acceptance of robots and design choices, your goal is to say "nuh uh", so gently caress right off.

But you, are lying. You are making things up and then trying to attach someone else's authority behind that when they're not making a claim remotely similar to what you were trying to defend.

Here's an idea: Just stop claiming "everyone" when what you mean is "tiny minorities". And stop making false claims to authority. Your personal hang ups about how the robot sent to kill you is weird because the legs are 0.35% off are not universal. Far more people are concerned about the fact that it's a robot designed for killing.


BrandorKP posted:

Ahem...

This is the ideal robotic dog. You may not like it but this is what peak performance looks like.

:agreed:

Taffer posted:

Things for the military often branch out. But even if it ends up doing warehouse work or other corporate busy-work, the same thing applies. It doesn't need to be "socially accepted" any more than a forklift does.

Like, we're arguing past each other at this point. I think everyone agrees that something should be approachable if it's going to be used for personal/consumer use, but that's not what this is developed for. It's strictly utilitarian. That said, I still think you're overblowing how "unsettling" it is. It is just a dog-like robot whose legs bend backwards. It's not that weird.

It's especially funny because real dogs have legs that bend "backwards". Or is it humans whose legs bend backwards compared to dogs? Makes you think.

No matter which way the leg bends, it's a way you'll see among many animal species.

McGiggins
Apr 4, 2014

by R. Guyovich
Lipstick Apathy
Can I have legal sex with self driving cars?

Self directing prostitution for example.

Spacewolf
May 19, 2014
Sex acts with AIs are probably going to be classed as being kind of like bestiality, I'd figure. Well, OK, I desperately hope.

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
What? Why would you hope for that?

edit: is this you IRL?

Spacewolf
May 19, 2014
Because I'm dubious about an AI's capacity to consent.

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
Oh, you mean like AI AI, like if it's sentient or close to that. Yeah there's gonna be all kinds of ethical issues once AI gets that advanced.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Cicero posted:

Oh, you mean like AI AI, like if it's sentient or close to that. Yeah there's gonna be all kinds of ethical issues once AI gets that advanced.

I mean, there are going to be a lot of ethical issues, but they are going to be more about how a human can deal with immortal beings with inhuman minds that can move body to body, clone and remerge their minds, have perfect memories, can rewrite or investigate any part of their own brain, and do not necessarily even have barriers between individuals or exist in any actual physical location.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Spacewolf posted:

Because I'm dubious about an AI's capacity to consent.

Well if we reach that kind of AI I’d be more concerned about Fuckotron 5001 not giving a poo poo about its owner’s consent before carrying out its primary function:2bong:

Better realdolls with improved moaning and mobility are a more likely outcome though.

Taffer
Oct 15, 2010


Spacewolf posted:

Because I'm dubious about an AI's capacity to consent.

We're not far from really sophisticated AI that can mimic a lot of human behavior and understanding. But we are very far from an AI that's actually self-aware. Like it's not even on the horizon, it's so complex that no one even knows what the prerequisites to it are. Our understanding of the human mind and our technological capabilities are still super far from that.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
It seems like there are 4real philosophical questions about how the heck crimes against nonhuman intelligence would even work, especially if people were making AIs to order, but talking about it in terms of sex seems like there is no way that conversation wouldn't get gross and creepy super fast.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Taffer posted:

We're not far from really sophisticated AI that can mimic a lot of human behavior and understanding. But we are very far from an AI that's actually self-aware. Like it's not even on the horizon, it's so complex that no one even knows what the prerequisites to it are. Our understanding of the human mind and our technological capabilities are still super far from that.

It would be better to describe it as, we have no idea how close we are to having a "really self-aware" AI. And because we don't have an idea of how close we are, it's almost certainly a) much farther than it might seem and b) likely to have little connection with various little things on the way.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
I am pretty sure we are never going to get a satisfactory answer on what self-awareness/consciousness/qualia even are or do and we are going to just have to exist in a world of more and more advanced machines that can complete more and more humanlike tasks without ever having any sort of magic moment when some vital force is definitively injected into a computer and it just wakes up as a dude.

ElCondemn
Aug 7, 2005


Owlofcreamcheese posted:

I am pretty sure we are never going to get a satisfactory answer on what self-awareness/consciousness/qualia even are or do and we are going to just have to exist in a world of more and more advanced machines that can complete more and more humanlike tasks without ever having any sort of magic moment when some vital force is definitively injected into a computer and it just wakes up as a dude.

I like the idea that consciousness is a spectrum, as the complexity increases so does consciousness. We all believe we're conscious but for all we know we're just doing what animals or computers do but at a higher level of complexity.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

ElCondemn posted:

I like the idea that consciousness is a spectrum, as the complexity increases so does consciousness. We all believe we're conscious but for all we know we're just doing what animals or computers do but at a higher level of complexity.

I feel like all the chinese room/p-zombie/blockhead type theory-of-mind arguments do a good job of showing you can't possibly ever determine if anything, including yourself, is or isn't conscious, and we probably just need to declare that anything that acts like it is might as well be, or that it doesn't matter, or that nothing is, or some other "who knows" type answer where it's irrelevant to anything.

Freakazoid_
Jul 5, 2013


Buglord
Not surprised that OOCC considers sex to be gross.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Freakazoid_ posted:

Not surprised that OOCC considers sex to be gross.

Rape is gross, and conversations where people recreationally try to figure out made up fantasy scenarios to figure out if something is or isn't rape are always gross. And the conversation about the rights on a thing that isn't a human and can be designed any way you want is a good conversation but absolutely shouldn't be had in terms of when it is or isn't raping it.

Grammar-Bolshevik
Oct 12, 2017

fishmech posted:

It's especially funny because real dogs have legs that bend "backwards". Or is it humans whose legs bend backwards compared to dogs? Makes you think.

No matter which way the leg bends, it's a way you'll see among many animal species.

I'm chiming in to correct the record on dog legs, as I do from time to time on these gay forums;

Dogs are digitigrades: their leg structure is similar to a human's if we were to elevate our heels and walk on the balls of our feet. Their legs do not bend backwards.

Please continue discussing the coming synthetic dawn.

Grammar-Bolshevik
Oct 12, 2017

fishmech posted:

It would be better to describe it as, we have no idea how close we are to having a "really self-aware" AI. And because we don't have an idea of how close we are, it's almost certainly a) much farther than it might seem and b) likely to have little connection with various little things on the way.

I don't recall where, but iirc there was a series of publications predicting heavier-than-air flight was decades away, just months before the Wright brothers flew their prototype.

Deep learning processes as they are now involve very little direct or manageable human understanding; it's more a management of inputs and outputs on a neural matrix that operates very independently and gets broadly tweaked.

We consider consciousness to be an emergent quality of the brain, and describing something as 'emergent' is often a way of saying 'we really don't understand this'.

We are still operating a bit blindly on even defining AGI, but I wouldn't preclude the chance that it emerges near term or long term, given the near-global effort to develop it.

As it stands, a functioning AGI could have strategic importance similar to the atomic bomb in 1945; chances are there are factions in the world pushing forward on getting that going asap.

Regardless, I just own a shitload of Amazon, Baba, and Google shares, hoping it emerges there and then profit

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Grammar-Bolshevik posted:

I don't recall where, but iirc there was a series of publications predicting heavier-than-air flight was decades away, just months before the Wright brothers flew their prototype.

Deep learning processes as they are now involve very little direct or manageable human understanding; it's more a management of inputs and outputs on a neural matrix that operates very independently and gets broadly tweaked.

"Deep learning" is likely to be one of these guys, who was also around just before the Wright Brothers flew:




Also uh, by the time the Wright Brothers were approaching success people knew we could make heavier than air craft that would maintain lift for extended times, knew ways to control them, and just hadn't cracked the proper power-to-weight ratio. It's a terrible comparison to building consciousness machines.

Grammar-Bolshevik
Oct 12, 2017

fishmech posted:

"Deep learning" is likely to be one of these guys, who was also around just before the Wright Brothers flew:




Also uh, by the time the Wright Brothers were approaching success people knew we could make heavier than air craft that would maintain lift for extended times, knew ways to control them, and just hadn't cracked the proper power-to-weight ratio. It's a terrible comparison to building consciousness machines.

someone somewhere posted:


"Heavier-than-air flying machines are impossible." - Lord Kelvin,
president, Royal Society, 1895.

"There is nothing new to be discovered in physics now. All that remains
is more and more precise measurement" - Lord Kelvin.

"Flight by machines heavier than air is unpractical and insignificant,
if not utterly impossible." - Simon Newcomb, 1902.

"Space travel is bunk" -Sir Harold Spencer Jones, Astronomer Royal of
Britain, 1957, two weeks before the launch of Sputnik

"Louis Pasteur's theory of germs is ridiculous fiction." - Pierre
Pachet, Professor of Physiology at Toulouse, 1872.

"The abdomen, the chest, and the brain will forever be shut from the
intrusion of the wise and humane surgeon." - Sir John Eric Ericksen,
British surgeon, appointed Surgeon-Extraordinary to Queen Victoria
1873.

"Such startling announcements as these should be deprecated as being
unworthy of science and mischievous to its true progress" - Sir
William Siemens, 1880, on Edison's announcement of a successful light bulb.

"Fooling around with alternating current is just a waste of time. Nobody
will use it, ever." - Thomas Edison, 1889

"It is apparent to me that the possibilities of the aeroplane, which two
or three years ago were thought to hold the solution to the [flying
machine] problem, have been exhausted, and that we must turn elsewhere."
- Thomas Edison, 1895

"Airplanes are interesting toys but of no military value." - Marechal
Ferdinand Foch, Professor of Strategy, Ecole Superieure de Guerre.

"There is not the slightest indication that nuclear energy will ever be
obtainable. It would mean that the atom would have to be shattered at
will." -- Albert Einstein, 1932.

"Computers in the future may weigh no more than 1.5 tons." - Popular
Mechanics, forecasting the relentless march of science, 1949.

"I have traveled the length and breadth of this country and talked
with the best people, and I can assure you that data processing is a
fad that won't last out the year." - The editor in charge of business
books for Prentice Hall, 1957.

"There is practically no chance communications space satellites will be
used to provide better telephone, telegraph, television, or radio
service inside the United States." -T. Craven, FCC Commissioner, 1961.

"But what... is it good for?" - Engineer at the Advanced Computing
Systems Division of IBM, 1968, commenting on the microchip.

"There is no reason anyone would want a computer in their home." - Ken
Olson, president, chairman and founder of Digital Equipment Corp.,
1977.

"640K ought to be enough for anybody." - Bill Gates, 1981.


I'm not going to tell you when something is going to be invented, but you have a nice list of people smarter than you saying poo poo that didn't pan out.

The least you could do is just leave it an unknown, because that is what a good person should do, rather than be a salty weeb about it.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Grammar-Bolshevik posted:

I'm not going to tell you when something is going to be invented, but you have a nice list of people smarter than you saying poo poo that didn't pan out.

The least you could do is just leave it an unknown, because that is what a good person should do, rather than be a salty weeb about it.

Cool list of quotes but they prove absolutely nothing. Additionally many were true, or were only meant for a limited horizon or the immediate situation during which they were true.

Consider that the Bill Gates quote is outright false - he never said it, and the best available sources on if it was said at all indicate that it was someone referring to the year it was being released, 1981. Incidentally in 1981, most personal computers being sold had well less than the 640 KB of potential contiguous memory the IBM PC would have. Consider: it was still impressive in 1982 that the Commodore 64 released with a whole 64 KB of RAM installed (~39k available to the user by default), and the then-current Apple II Plus models did not ship from Apple with more than 64 KB (and it was quite difficult to meaningfully use more than 64 KB on the architecture anyway, though later Apple II models could be expanded to several megabytes). The high end IBM PC models with 256 KB up to 512 KB installed in them were very advanced machines for the time, and indeed little was developed that had a problem with the 640 KB limit, largely due to few people and businesses having the full megabyte plus of RAM to have 640 KB contiguous conventional memory accessible.


Edit: And you should also consider that Edison was well aware alternating current was viable, but he had the patents on vital direct current poo poo so of course he's going to bag on the competition. So that one might as well be a quote of Pepsi saying Coke doesn't taste good or whatever. The DEC guy was not talking about personal computers, but rather the archetypal "smart home" computer that would control everything from the HVAC to cooking dinner etc. That poo poo still doesn't work right without very careful setup and attention despite another 40 years behind it.

fishmech fucked around with this message at 05:07 on May 22, 2018

Bar Ran Dun
Jan 22, 2006
They thought the same thing about the differential equations paradigm in systems. Ideological hubris is dangerous. Failure to recognize a new way of solving problems and thinking about solutions is also dangerous.

Grammar-Bolshevik
Oct 12, 2017

quote:

The experts that Grace and co co-opted were academics and industry experts who gave papers at the International Conference on Machine Learning in July 2015 and the Neural Information Processing Systems conference in December 2015. These are two of the most important events for experts in artificial intelligence, so it’s a good bet that many of the world’s experts were on this list.

Grace and co asked them all—1,634 of them—to fill in a survey about when artificial intelligence would be better and cheaper than humans at a variety of tasks. Of these experts, 352 responded. Grace and co then calculated their median responses

The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027).

But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053.

The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best humans. That took two years rather than 12. It’s easy to think that this gives the lie to these predictions.

The experts go on to predict a 50 percent chance that AI will be better than humans at more or less everything in about 45 years.

https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/
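The aggregation in that article is simple enough to sketch. With made-up response data (illustrative only, not the actual survey numbers), the per-task median calculation looks like:

```python
from statistics import median

# Hypothetical survey responses: each expert's predicted year for a task.
# These numbers are illustrative, not the real Grace et al. data.
responses = {
    "translating languages": [2022, 2024, 2024, 2025, 2030],
    "driving trucks": [2025, 2026, 2027, 2029, 2035],
}

# Aggregating by the median prediction per task is robust to a few
# wildly optimistic or pessimistic experts skewing the result.
median_years = {task: median(years) for task, years in responses.items()}
print(median_years)  # {'translating languages': 2024, 'driving trucks': 2027}
```

The median is exactly why one AlphaGo-style surprise doesn't invalidate the whole survey: a single outlier moves the mean a lot but barely touches the median.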

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost

Spacewolf posted:

Because I'm dubious about an AI's capacity to consent.

Can an AI consent to being created in the first place?

What should the lifespan be of the first AI?

Trabisnikof
Dec 24, 2005

Mozi posted:

Can an AI consent to being created in the first place?

What should the lifespan be of the first AI?

SaTaMaS
Apr 18, 2003

BrandorKP posted:

They thought the same thing about the differential equations paradigm in systems. Ideological hubris is dangerous. Failure to recognize a new way of solving problems and thinking about solutions is also dangerous.

Any good sources to read on what the heck the "differential equations paradigm" is?

Bar Ran Dun
Jan 22, 2006

SaTaMaS posted:

Any good sources to read on what the heck the "differential equations paradigm" is?

Basically a couple of things all converged. Systems thinking evolves out of designing / modeling steam power plants, and it starts heavy on first law / second law thermo. Rocketry needs controls theory, which uses differential equations to get solved. The controls theory and the systems theory get combined: you can use the differential equations to solve the controls problems in the modeled steam system and make real analog automation for the steam plants with pneumatic and spring-controlled valves. You then have a way of describing the systems (which have stocks and flows) and the feedback and control elements inside the systems (the controls part).

Oh poo poo, this type of modeling can be applied all over the loving place in society and business. It takes off in places like MIT. Anything with stocks and flows, feedback loops, time delays can now be described. Rand Corp ends up doing an assload of it. Hubris in thinking we can know, caused by the advances in System Dynamics, contributes to cluster fucks like the Vietnam war. "Contributes" in a "many of the same people", direct-relationship sort of way.

Look for "System Dynamics"
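For anyone who wants the stocks-and-flows idea made concrete: a System Dynamics model is just a differential equation you integrate numerically. A minimal sketch (a purely illustrative tank with a feedback-controlled inflow, Euler integration):

```python
# Minimal stock-and-flow sketch: one stock (water in a tank), one
# negative-feedback loop (inflow driven by the error from a setpoint),
# one drain. dS/dt = inflow - outflow, integrated with Euler steps.

def simulate(setpoint=100.0, gain=0.5, leak=0.1, steps=200, dt=0.1):
    stock = 0.0
    for _ in range(steps):
        inflow = gain * (setpoint - stock)   # feedback on the error
        outflow = leak * stock               # drain proportional to the stock
        stock += (inflow - outflow) * dt     # Euler step of dS/dt
    return stock

# The feedback loop settles the stock at gain / (gain + leak) * setpoint,
# i.e. about 83.3 here:
print(round(simulate(), 1))  # -> 83.3
```

Swap the tank for inventory, population, or troop levels and you have the Rand-era modeling the post describes; the hubris came from assuming every messy social system could be captured this cleanly.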

Cicero
Dec 17, 2003

Jumpjet, melta, jumpjet. Repeat for ten minutes or until victory is assured.
Preliminary report for the Uber crash is out, here's an excerpt from the summary:

quote:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).[2]  According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
https://www.ntsb.gov/investigations/AccidentReports/Pages/HWY18MH010-prelim.aspx
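Back-of-the-envelope kinematics on those numbers (assuming a hard-braking deceleration of about 7 m/s², which is a typical dry-road figure I'm supplying, not something from the report): even perfect, instant braking at the 1.3-second mark could not have stopped the car.

```python
# Rough kinematics for the NTSB numbers: 43 mph, brake decision 1.3 s out.
# The 7 m/s^2 deceleration is an assumed typical value, not from the report.
MPH_TO_MS = 0.44704

v0 = 43 * MPH_TO_MS          # ~19.2 m/s initial speed
a = 7.0                      # assumed hard-braking deceleration, m/s^2

stop_time = v0 / a           # time needed to brake to a stop
stop_dist = v0**2 / (2 * a)  # distance needed: v^2 / 2a

# Speed remaining if maximum braking had started 1.3 s before impact:
v_impact = max(0.0, v0 - a * 1.3)

print(f"need {stop_time:.1f} s / {stop_dist:.1f} m to stop")
print(f"impact speed with instant braking: {v_impact / MPH_TO_MS:.0f} mph")
```

Under those assumptions the car needs roughly 2.7 seconds and 26 meters to stop, so flawless braking at 1.3 seconds out still means hitting at around 23 mph; the window to avoid the collision closed before the system even flagged it.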

Spacewolf
May 19, 2014

NTSB report posted:

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

So the computer won't do it and won't tell the operator to do it...Hm! Interesting strategy Uber! So how is the operator supposed to know the computer doesn't have it under control?

...probably telepathy.

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?
Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Spacewolf posted:

So the computer won't do it and won't tell the operator to do it...Hm! Interesting strategy Uber! So how is the operator supposed to know the computer doesn't have it under control?

...probably telepathy.

It seems unclear if that is talking about the Uber self-driving thing or the Volvo City Safety braking thing.

The Volvo City Safety emergency braking stuff is off because it would be legitimately crazy to have both active at the same time.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

Paradoxish posted:

Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally.

The operator is supposed to be checking the self driving report console thing in the middle of the dash. That requires them to take their eyes off the road.

Raldikuk
Apr 7, 2006

I'm bad with money and I want that meatball!

Owlofcreamcheese posted:

It seems unclear if that is talking about the Uber self driving thing or the Volvo City Safety braking thing.

The volvo city safety emergency braking stuff is off because that would be legitimately crazy to have at the same time.

It isn't unclear at all from the quote. Read it again:

quote:

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

They even give their rationale for emergency braking maneuvers being disabled: to reduce erratic behavior. Or do you think the article is grossly misrepresenting what Uber said? Cuz from the entire article there is no way to come to the conclusion that they might have meant they disabled Volvo's equipment. Over and over again it says that the operator is the one who is supposed to do the braking.

*Edit: And why consistently give the benefit of the doubt to a lovely company that doesn't give a poo poo about safety in any respect of its business? Uber killed someone because of this, and there's no reason to believe there aren't many different areas where Uber failed to do its due diligence to ensure its system was operating safely.

Raldikuk fucked around with this message at 17:05 on May 24, 2018


suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Paradoxish posted:

Not arguing that the system isn't flawed, but I'm pretty sure the driver is supposed to be aware and ready to brake. They aren't supposed to wait for the computer to stop driving, they're supposed to act immediately during an emergency just as they would if they were driving normally.

Nobody wants this though. The whole point of a self driving car is that you can sleep, gently caress around on your phone or masturbate instead of driving. If you’re still required to constantly pay attention and need to have your hands and feet on the wheel and brakes at all times the convenience gain over a regular car is next to zero.
