Nenonen
Oct 22, 2009

I always have thirty bucks in my pocket
Self-driving cars actually reduce deaths because they make pedestrians more cautious when crossing roads under conditions where a normal sober driver would notice them and avoid a crash.


Turtlicious
Sep 17, 2012

by Jeffrey of YOSPOS
Why not make the car stop no matter what is in front of it???

Shinmera
Mar 25, 2013

I make games!

Because an unexpected full stop, triggered by some piece of paper flying past the cameras, would also be a source of accidents.

The real solution is to eliminate all cars

Memento
Aug 25, 2009


Bleak Gremlin

klafbang posted:

Thanks for jumping to the nazi card. That really stresses how seriously I should take your comments.

The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours.

klafbang
Nov 18, 2009
Clapping Larry

Memento posted:

The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours.

Dismissing the moral discussion as nazi poo poo because you don't like it is not resolving anything. At some point you have to have that discussion and people are not going to agree. At some point there will be losses, no amount of testing will prevent that 100%, and we should probably discuss how many are acceptable beforehand, because otherwise we get techbros beta testing on people. We are already there so dismissing the discussion is likely to just cause more harm.

I would be perfectly fine just getting rid of cars, self-driving or not, but I realize that not everybody agrees and I don't get to have things 100% my way. And I don't go around comparing people I disagree with, or their opinions, to nazis, because that is not engaging in a meaningful way.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Ok mengele.

Platystemon
Feb 13, 2012

BREADS
I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…”

That’s where it ends because I cannot think of a good example where innocent people had to be sacrificed for the greater good. It is largely a thing that exists only in the imaginations of philosophers and fascists.

That’s a problem for the whole concept.

The closest I can come to a good example is early aviation, but the risks there were mostly borne by consenting pioneers. Anywhere they weren’t, it should be denounced: ærodromes near population centres, for example.

`Nemesis
Dec 30, 2000

railroad graffiti

Turtlicious posted:

Why not make the car stop no matter what is in front of it???

We have a very, very long period ahead of mixed operation, with self-driving and human-driven vehicles having to interact with each other, and if you don't think people would abuse this, well, I don't know what to say.

Memento posted:

The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours.

Yes.

evil_bunnY
Apr 2, 2003

hellotoothpaste posted:

Current consumer-facing SDKs have no trouble tracking the multitude of objects in view in a driving scenario by unique ID, regardless of whether the classification changes. Not keeping a useful running tally of each object's classification history in that case is extremely batshit and dumb, among other things.

My loving camera from 6 years ago has autofocus that can deal with temporary subject occlusion, Uber's just reckless.
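
To make the quoted point concrete, here's a minimal, hypothetical sketch of what "a useful running tally of each object's classification history" could look like. It's plain Python with every name invented for illustration; this is not Uber's pipeline or any real SDK's API.

code:

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Track:
    """One tracked object, identified by a persistent ID (all names are made up)."""
    track_id: int
    positions: list = field(default_factory=list)      # (t, x, y) observations
    labels: Counter = field(default_factory=Counter)    # running tally of classifications

    def update(self, t, x, y, label):
        # Keep the motion history even when the classifier changes its mind;
        # throwing it away on every relabel is what loses the velocity estimate.
        self.positions.append((t, x, y))
        self.labels[label] += 1

    def dominant_label(self):
        # Most frequent classification so far, not just the latest frame.
        return self.labels.most_common(1)[0][0] if self.labels else "unknown"

    def is_crossing_hazard(self, flicker_threshold=3):
        # Conservative rule: if the classifier has flip-flopped between several
        # labels, treat the object as a potential pedestrian/cyclist anyway.
        return len(self.labels) >= flicker_threshold or \
               self.dominant_label() in {"pedestrian", "cyclist", "unknown"}

The specific thresholds don't matter; the point is that the track ID and its history survive a re-classification.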

Memento
Aug 25, 2009


Bleak Gremlin

evil_bunnY posted:

Uber's just reckless.

This needs to be front and center in everyone's minds during this discussion. Uber is a company that has been sued so many times that those four links are just what I found on the first page of search results. They cut as many corners as they could in an effort to pump up their stock price before the IPO and by any metric it failed, because of their insanely shoddy business practices. The only reason they're as far as they are in the world of self-driving cars is because they stole technology from Google. They're the worst kind of corporate scum and deserve as much scorn as you want to heap on them.

zedprime
Jun 9, 2007

yospos

Platystemon posted:

I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…”

That’s where it ends because I cannot think of a good example where innocent people had to be sacrificed for the greater good. It is largely a thing that exists only in the imaginations of philosophers and fascists.

That’s a problem for the whole concept.

The closest I can come to a good example is early aviation, but the risks there were mostly borne by consenting pioneers. Anywhere they weren’t, it should be denounced: ærodromes near population centres, for example.
Risk management is inseparable from the calculus of human life improvement. Farms, mines, and factories are operated with the idea that at least a few eggs are going to get broken to make the better-society omelette. Even with state-of-the-art protections there are a lot of jobs that are dangerous or have high chances of debilitation. Insurance companies sit around speculating about how dangerous it is or isn't, so they can make a profit off the idea.

But I think the relevant point here is that even those dangerous jobs draw the line at the worker being compensated for, and made aware of, those risks, while a lot of the gung-ho self-driving-car folks lean toward the opposite: the rich person in the car should be protected over everything else on the road.

Collateral Damage
Jun 13, 2009

evil_bunnY posted:

Uber's just reckless.
Excuse me, the term is "disruptive".



:suicide:

StrangersInTheNight
Dec 31, 2007
ABSOLUTE FUCKING GUDGEON

evil_bunnY posted:

My loving camera from 6 years ago has autofocus that can deal with temporary subject occlusion, Uber's just reckless.

no, Uber is criminally negligent while pretending to simply be as benign as 'wreckless' to get away with doing as little as possible. if you play dumb, people assume less of you and hold you responsible for less.

don't ever let 'oh that didn't occur to us' stand as an excuse from a corporation

Platystemon
Feb 13, 2012

BREADS

zedprime posted:

Risk management is inseparable from the calculus of human life improvement. Farms, mines, and factories are operated with the idea that at least a few eggs are going to get broken to make the better-society omelette.

These things are productive industries and they have been from the start.

Uber’s cars weren’t taking the visually impaired to concerts or augmenting human drivers’ reaction times.

What Uber was doing was at best human experimentation, and, to be less credulous, they risked lives to boost their stock with no intention of ever having a functional machine.

One experiment that isn’t on Wikipedia’s list is Operation Bongo II. The USAF subjected the people of Oklahoma City to regular sonic booms to see how they would handle it. Not well, it turns out. The experiment ended early and it threw the whole notion of supersonic transit into question.

I see some parallels there.

evil_bunnY
Apr 2, 2003

StrangersInTheNight posted:

no, Uber is criminally negligent while pretending to simply be as benign as 'wreckless' to get away with doing as little as possible. if you play dumb,
Oh don't get me wrong, I 100% believe they're being reckless on purpose.

Nuclearmonkee
Jun 10, 2009


They don’t care; there’s money to be made, maybe, someday.

MononcQc
May 29, 2007

poster: Comparing uber with nazis is callous and we should really think about current sacrifices in self-driving cars to save future people
uber:

https://twitter.com/BNONews/status/1193698590641860608


Admiral Joeslop
Jul 8, 2010




What the gently caress.

Nocheez
Sep 5, 2000

Can you spare a little cheddar?
Nap Ghost

Admiral Joeslop posted:

What the gently caress.

LifeSunDeath
Jan 4, 2007

still gay rights and smoke weed every day
Uber CEO: Hey, mistakes were made, and people got forgiven. Now I'm not saying Hitler was right, but....

Guyver
Dec 5, 2006

Wait you're saying someone who runs a large company is a complete psychopath?

...no it can't be


Spatial
Nov 15, 2007

Well, they replaced the last CEO because he was a walking PR disaster, so you'd think this guy wouldn't say poo poo like this. They can't help it though, they're talking from the heart and being the best person they can be

Nuclearmonkee
Jun 10, 2009


Their entire business model requires self driving car story time to be made real in order to ever turn a profit. The government isn't particularly interested in forcing these fuckers to be safe so they can just kill a few people in the pursuit of this goal with very limited repercussions.

All so we don't have to fix transit.

Nocheez
Sep 5, 2000

Can you spare a little cheddar?
Nap Ghost
There was a very dangerous intersection near where I live. I witnessed numerous near misses, until finally there was an accident where a young girl got t-boned because she misjudged the speed of an oncoming car. This was the Xth accident in Y years which could have been prevented by a stoplight, so they finally installed one.

Just :lol: if you think America is going to be proactive about self-driving cars. The code and laws which keep us safe in the future will be written in blood today.

Icon Of Sin
Dec 26, 2008



Nocheez posted:

The code and laws which keep us safe in the future will be written in blood today.

This sentence right here is the soul of OSHA in general, and this thread in particular.

Cichlidae
Aug 12, 2005

ME LOVE
MAKE RED LIGHT


Dr. Infant, MD
It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than those of human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there).
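
For scale, a quick back-of-the-envelope on what "lower than human drivers" actually takes to demonstrate. The ~1.1 deaths per 100 million vehicle-miles figure is the rough recent US rate and is an assumption here, as is the simple rule-of-three bound:

code:

# If a fleet drives n miles with zero fatalities, the 95% upper confidence
# bound on its fatality rate is roughly 3 / n (the "rule of three").
HUMAN_RATE = 1.1e-8   # assumed: ~1.1 deaths per 100 million vehicle-miles (US)

miles_needed = 3 / HUMAN_RATE
print(f"{miles_needed:.2e} fatality-free miles to show the fleet beats humans")
# ~2.7e+08: a few hundred million miles without a single death, just to
# demonstrate parity at 95% confidence.

So the bar is low in principle, but actually proving you've cleared it takes an enormous amount of driving.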

Deteriorata
Feb 6, 2005

Cichlidae posted:

It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than those of human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there).

Fundamentally, they're just a new consumer product. We can get along just fine without them until they reach a level of safety we're comfortable with. That venture capitalists are willing to get a return on their investment through human sacrifice is irrelevant.

MononcQc
May 29, 2007

Cichlidae posted:

It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than those of human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there).

If it were anything close to rational we'd all be taking the bus and other public transit, because it's already demonstrably safer than cars, with far lower infrastructure costs once you consider urban sprawl, and the tech is available already.

That being said, in an investigation you generally want to start at human error, not stop there.

PHIZ KALIFA
Dec 21, 2011

#mood
lol, just lol if you haven't figured out how to do the chain link bomb jump from Breath of the Wild

(I attempt to backflip off a tree stump and instead immediately pwn my neck)

Tiny Timbs
Sep 6, 2008

MononcQc posted:

poster: Comparing uber with nazis is callous and we should really think about current sacrifices in self-driving cars to save future people
uber:

https://twitter.com/BNONews/status/1193698590641860608

Imagine defending anything this company is doing

Phanatic
Mar 13, 2007

Please don't forget that I am an extremely racist idiot who also has terrible opinions about the Culture series.

Platystemon posted:

I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…”

That’s where it ends because I cannot think of a good example where innocent people had to be sacrificed for the greater good. It is largely a thing that exists only in the imaginations of philosophers and fascists.

Vaccinations.

No, no, I'm not an antivax nut, vaccination is correct and good and an enormous social benefit and parents who don't vaccinate their kids are morons. But there are deaths, and I think this is a good example. Notably the swine flu mass vaccination, when we were scared shitless of a repeat of the 1918 epidemic, and ended up giving a bunch of people Guillain-Barré syndrome and 50-odd people died as a result. Polio's another example; even setting aside the Cutter incident, the vaccine's been so successful at reducing polio that there are now more cases of vaccine-induced polio than there are of "wild" polio.

Uncle Enzo
Apr 28, 2008

I always wanted to be a Wizard

Phanatic posted:

Vaccinations.

No, no, I'm not an antivax nut, vaccination is correct and good and an enormous social benefit and parents who don't vaccinate their kids are morons. But there are deaths, and I think this is a good example. Notably the swine flu mass vaccination, when we were scared shitless of a repeat of the 1918 epidemic, and ended up giving a bunch of people Guillain-Barré syndrome and 50-odd people died as a result. Polio's another example; even setting aside the Cutter incident, the vaccine's been so successful at reducing polio that there are now more cases of vaccine-induced polio than there are of "wild" polio.

Vaccines offer a potential benefit to the people they're tested on. What benefit would have gone to the pedestrian Uber killed? Where is her signed consent form?

Sex Skeleton
Aug 16, 2018

For when lonely nights turn bonely

klafbang posted:

Thanks for jumping to the nazi card. That really stresses how seriously I should take your comments.

I am not saying that the testing method was acceptable or that Uber should not be punished. I am saying that the error is not as obvious as everybody would make it seem. It is very easy to point afterwards and say that people should have caught it. Even if off-the-shelf APIs can follow objects, that does not imply that this also works with feature interactions that have conflicting goals.

This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation. Which is stuff that Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to:

1.) Identify all of the paths in your code that can potentially lead to a hazardous condition due to logic errors.
2.) Identify all data that could lead to a hazardous condition if it is out of range or sensed or calculated incorrectly.
3.) Identify hazardous situations that your software could find itself creating, and work backwards through the logic in your code base to determine whether those hazardous situations could conceivably occur.

Frankly there is no evidence that Uber ever did this. And the problem with using neural nets and AI to classify objects is that there is no conceivable way to establish that those systems function reliably or correctly. Any experienced engineer of a safety-critical software system would have recognized the serious risks involved in feeding the output data from your impossible-to-verify AI classification system into your object detection and object tracking routines.

You cannot test safety or quality back into a piece of software if it was never developed to be safe software in the first place. By the time it makes it to QA if the architecture and design of the software does not adhere to quality or safety standards, no amount of testing will change the architecture or design to be of sufficient quality.

And don't get me started on how negligent it is to hire a safety driver but not provide any methods of ensuring they're paying attention to the road the entire time. Even putting a little "object classification checker" minigame on a tablet or HUD would have been better than what Uber did, which was nothing. The rail industry has had this figured out for decades, and it's also pretty loving negligent to not hire anyone who has experience designing those kinds of systems to design a system to keep the safety driver involved in the monitoring process.
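
For what "the rail industry has had this figured out for decades" means at its most basic, the baseline is an alerter / dead man's switch: prompt the safety driver on a fixed cycle, warn them, and bring the vehicle to a controlled stop if they never respond. A toy sketch follows; the names, timings, and returned actions are all made up for illustration and don't come from Uber or any real rail system.

code:

import time

class DriverAlerter:
    # Toy railroad-style alerter / dead man's switch. Illustrative only.

    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.last_ack = time.monotonic()

    def acknowledge(self):
        # Driver pressed the button / touched the wheel / answered the
        # "what did the car just classify?" prompt.
        self.last_ack = time.monotonic()

    def check(self):
        # Returns the action the vehicle should take right now.
        overdue = time.monotonic() - self.last_ack
        if overdue > self.timeout:
            return "controlled_stop"   # fail safe, not fail silent
        if overdue > 0.5 * self.timeout:
            return "sound_alerter"     # warn before escalating
        return "ok"

Even something this dumb would have been more than what Uber had in the car.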

MononcQc
May 29, 2007

Sex Skeleton posted:

This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation. Which is stuff that Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to:

1.) Identify all of the paths in your code that can potentially lead to a hazardous condition due to logic errors.
2.) Identify all data that could lead to a hazardous condition if it is out of range or sensed or calculated incorrectly.
3.) Identify hazardous situations that your software could find itself creating, and work backwards through the logic in your code base to determine whether those hazardous situations could conceivably occur.

Frankly there is no evidence that Uber ever did this. And the problem with using neural nets and AI to classify objects is that there is no conceivable way to establish that those systems function reliably or correctly. Any experienced engineer of a safety-critical software system would have recognized the serious risks involved in feeding the output data from your impossible-to-verify AI classification system into your object detection and object tracking routines.

You cannot test safety or quality back into a piece of software if it was never developed to be safe software in the first place. By the time it makes it to QA if the architecture and design of the software does not adhere to quality or safety standards, no amount of testing will change the architecture or design to be of sufficient quality.

And don't get me started on how negligent it is to hire a safety driver but not provide any methods of ensuring they're paying attention to the road the entire time. Even putting a little "object classification checker" minigame on a tablet or HUD would have been better than what Uber did, which was nothing. The rail industry has had this figured out for decades, and it's also pretty loving negligent to not hire anyone who has experience designing those kinds of systems to design a system to keep the safety driver involved in the monitoring process.

Here's how silicon valley operates:

https://www.theverge.com/2018/3/20/17144090/uber-car-accident-arizona-safety-anthony-levandowski-waymo

quote:

New York Magazine once attributed Levandowski as saying, “I’m pissed we didn’t have the first death,” to a group of Uber engineers after a driver died in a Tesla on autopilot in 2016. (Levandowski has denied ever saying it).

“The team is not moving fast enough due to a combination of risk aversion and lack of urgency, we need to move faster,” Levandowski told Page in another communication that was shown during the Waymo trial.

His messages to Travis Kalanick were more casual. “We need to think through the strategy, to take all the shortcuts we can find,” he said in one text message. And in another, “I just see this as a race and we need to win, second place is first looser [sic].”

Kalanick was similarly breezy. “Burn the village,” he texted Levandowski at one point.

“Yup,” Levandowski replied, within seconds.

oohhboy
Jun 8, 2013

by Jeffrey of YOSPOS
How the hell was Uber not found guilty??

wdarkk
Oct 26, 2007

Friends: Protected
World: Saved
Crablettes: Eaten

oohhboy posted:

How the hell was Uber not found guilty??

Arizona.

Phanatic
Mar 13, 2007

Please don't forget that I am an extremely racist idiot who also has terrible opinions about the Culture series.

Uncle Enzo posted:

Vaccines offer a potential benefit to the people they're tested on. What benefit would have gone to the pedestrian Uber killed? Where is her signed consent form?

Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned.

haveblue
Aug 15, 2005



Toilet Rascal

Sex Skeleton posted:

This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation. Which is stuff that Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to:

1.) Identify all of the paths in your code that can potentially lead to a hazardous condition due to logic errors.
2.) Identify all data that could lead to a hazardous condition if it is out of range or sensed or calculated incorrectly.
3.) Identify hazardous situations that your software could find itself creating, and work backwards through the logic in your code base to determine whether those hazardous situations could conceivably occur.

Frankly there is no evidence that Uber ever did this. And the problem with using neural nets and AI to classify objects is that there is no conceivable way to establish that those systems function reliably or correctly. Any experienced engineer of a safety-critical software system would have recognized the serious risks involved in feeding the output data from your impossible-to-verify AI classification system into your object detection and object tracking routines.

You cannot test safety or quality back into a piece of software if it was never developed to be safe software in the first place. By the time it makes it to QA if the architecture and design of the software does not adhere to quality or safety standards, no amount of testing will change the architecture or design to be of sufficient quality.

And don't get me started on how negligent it is to hire a safety driver but not provide any methods of ensuring they're paying attention to the road the entire time. Even putting a little "object classification checker" minigame on a tablet or HUD would have been better than what Uber did, which was nothing. The rail industry has had this figured out for decades, and it's also pretty loving negligent to not hire anyone who has experience designing those kinds of systems to design a system to keep the safety driver involved in the monitoring process.

For comparison, this is how to write perfect software when lives are on the line. It results in code that's pretty much entirely bug-free (not to mention on schedule and on budget) but it's utterly unlike commercial software development and no one trying to make a profit could work like that.

Booley
Apr 25, 2010
I CAN BARELY MAKE IT A WEEK WITHOUT ACTING LIKE AN ASSHOLE
Grimey Drawer

Phanatic posted:

Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned.

And this is exactly the same as being run over by a self driving Uber car programmed by idiots because:


Deteriorata
Feb 6, 2005

Phanatic posted:

Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned.

Nonetheless, that was a situation where the developers had made it as safe as they could, and an independent board declared the risk to be acceptable. Nothing ever has no risk associated with it.

It is not at all analogous to self-driving cars, where the developers' first priority is to start making money quickly and killing people is part of their R&D.
