CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Pigsfeet on Rye posted:

Code poorly writ? You must acquit!

Jaywalking laws are what got them off the hook.

Remember, Auto Clubs pushed for jaywalking laws that largely blamed victims for their demise rather than drivers


hellotoothpaste
Dec 21, 2006

I dare you to call it a perm again..

Pyroclastic posted:

The article I read had a little more detail--it re-categorized the victim several times, but even after a re-categorization, it was still detecting an object in the road that it may have to brake for. The real issue wasn't 'didn't see a pedestrian because it wasn't in the crosswalk' (although if it was able to recognize a pedestrian outside of a crosswalk, maybe it wouldn't've gotten confused at what it was seeing), it was because every time it re-categorized her, it threw out its prior motion data, so the system literally didn't recognize that she was moving across its path. Every time the system changed what it thought it saw, she had a known starting speed of 0. It detected her as a bicycle several times, but the system still would have braked for a bicycle moving across its path at any point in a roadway, but the re-categorizations just meant it had near-static snapshots of a variety of obstacles that didn't have a vector that crossed the car's.

Holy poo poo, that is so absurd I didn't even consider it.
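The failure mode described above can be sketched in a few lines (hypothetical toy code, not Uber's actual system — the class name, fields, and numbers are all made up for illustration): a tracker that throws away an object's motion history whenever its classification changes will report a speed of zero right after every re-classification, exactly the "known starting speed of 0" problem.

```python
# Toy sketch of the bug: re-classification wipes the motion history,
# so the object looks freshly appeared and stationary every time.

class Track:
    def __init__(self, label):
        self.label = label
        self.positions = []  # observed (time, x) samples

    def observe(self, t, x, label):
        if label != self.label:
            # The bug: a new classification discards the history,
            # as if a brand-new static object had just appeared.
            self.positions = []
            self.label = label
        self.positions.append((t, x))

    def speed(self):
        # Need at least two samples to estimate motion; a freshly
        # re-classified object always looks static.
        if len(self.positions) < 2:
            return 0.0
        (t0, x0), (t1, x1) = self.positions[-2], self.positions[-1]
        return (x1 - x0) / (t1 - t0)

track = Track("unknown")
track.observe(0.0, 0.0, "unknown")
track.observe(1.0, 1.5, "unknown")
print(track.speed())                # 1.5 m/s -- motion detected
track.observe(2.0, 3.0, "bicycle")  # re-classified: history wiped
print(track.speed())                # 0.0 m/s -- "static" again
```

Each re-classification resets the estimate, so the system only ever sees a series of near-static snapshots with no vector crossing the car's path.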

PHIZ KALIFA
Dec 21, 2011

#mood

pitting workers against each other is an OSHA hazard

Kazy
Oct 23, 2006

0x38: FLOPPY_INTERNAL_ERROR

HardDiskD posted:

And someone mentioned that Uber got acquitted, right? :psyduck:

Acquittal implies criminal charges were filed. They barely even considered holding Uber criminally liable.

drgitlin
Jul 25, 2003
luv 2 get custom titles from a forum that goes into revolt when it's told to stop using a bad word.

evil_bunnY posted:

lmao that's criminally incompetent.

I doubt any of these systems have object permanence. I know Tesla’s “smart summon” feature doesn’t.

Spatial
Nov 15, 2007

Yes it is criminally negligent. Also it's completely and utterly bugfuck insane that anyone is allowed to test something like this on a public road. A giant multi-ton projectile with unknown behavioural characteristics with the public as the guinea pig. It's like testing a self-shooting gun in a public park.

Jokerpilled Drudge
Jan 27, 2010

by Pragmatica

Spatial posted:

Yes it is criminally negligent. Also it's completely and utterly bugfuck insane that anyone is allowed to test something like this on a public road. A giant multi-ton projectile with unknown behavioural characteristics with the public as the guinea pig. It's like testing a self-shooting gun in a public park.

and yet so many people, goons included, were breathless to tell us that this was going to be great. Meanwhile, anyone who had any experience with software or big companies like uber knew that we were in for some serious loving bullshit.

Pyroclastic
Jan 4, 2010

drgitlin posted:

I doubt any of these systems have object permanence. I know Tesla’s “smart summon” feature doesn’t.

Uber's system did.

Ars Technica posted:

Second, the constantly switching classifications prevented Uber's software from accurately computing her trajectory and realizing she was on a collision course with the vehicle. You might think that if a self-driving system sees an object moving into the path of the vehicle, it would put on its brakes even if it wasn't sure what kind of object it was. But that's not how Uber's software worked.
The system used an object's previously observed locations to help compute its speed and predict its future path. However, "if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories," the NTSB reports.

And, yes, Uber was cleared of criminal liability almost immediately, well before the NTSB finished its investigation (maybe the feds could file their own charges?). Uber's just fine with throwing their 'safety' driver under the bus. Or under the Uber.

haveblue
Aug 15, 2005



Toilet Rascal

Spatial posted:

Yes it is criminally negligent. Also it's completely and utterly bugfuck insane that anyone is allowed to test something like this on a public road. A giant multi-ton projectile with unknown behavioural characteristics with the public as the guinea pig. It's like testing a self-shooting gun in a public park.

ED-209, except with random members of the public instead of executives

LifeSunDeath
Jan 4, 2007

still gay rights and smoke weed every day
https://i.imgur.com/Cfwhj6N.mp4

Somebody fucked around with this message at 00:23 on Nov 11, 2019

Baronjutter
Dec 31, 2007

"Tiny Trains"

I can't imagine that guy is ok...

corgski
Feb 6, 2007

Silly goose, you're here forever.

That is definitely the "instantaneously vaporized" sort of arc flash, yes.

Cojawfee
May 31, 2006
I think the US is dumb for not using Celsius
If that guy died, you should probably remove that one.

Nenonen
Oct 22, 2009

Mulla on aina kolkyt donaa taskussa
It's the adjacent electric box that seems to explode to me?

Burt Sexual
Jan 26, 2006

by Jeffrey of YOSPOS
Switchblade Switcharoo
Please link and nms those ty. I’m mobile

Jokerpilled Drudge
Jan 27, 2010

by Pragmatica
Video make me real scared

drgitlin
Jul 25, 2003
luv 2 get custom titles from a forum that goes into revolt when it's told to stop using a bad word.

Pyroclastic posted:

Uber's system did.


That quote says it didn’t, because changing classification made it forget it.

Burt Sexual
Jan 26, 2006

by Jeffrey of YOSPOS
Switchblade Switcharoo

Enkmar posted:

Video make me real scared

It’s in the op you idiot.

C.M. Kruger
Oct 28, 2013
car: *hurtles towards pedestrian*
AI: *screams and covers face with hands*

angryrobots
Mar 31, 2005

corgski posted:

That is definitely the "instantaneously vaporized" sort of arc flash, yes.

It really depends on how long he was exposed, and how well his PPE protected him. If the explosion was at chest level or above (or not located in the panel closest to him), he probably had a pretty decent shot at minor to no burns, assuming he moved away immediately. He was in the open, not an enclosed space, and appeared to have everything covered... Hopefully with material that was FR and did not continue to burn after the accident.

Worst case would be a close explosion that was below chest level, because as the heat rolls up it would get behind the face shield, or an untucked shirt.

This poo poo right here is why I harp on equipment grounding and fault protection in the wiring thread. There is a stupid amount of fault current potential at the utility level, even at residential voltage in many situations. You want that mess to be outside and enclosed, with anything coming inside protected to limit the fault potential.
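The "stupid amount of fault current" point is just Ohm's law: at the utility level the source impedance is tiny, so a bolted fault sees almost nothing limiting the current. A rough illustration (the numbers here are made-up example values, not a real service calculation):

```python
# Worst-case ("bolted") fault current, ignoring arc impedance:
# plain Ohm's law with the source/loop impedance as the only limit.

def bolted_fault_current(voltage, impedance_ohms):
    """Available fault current in amps for a dead short."""
    return voltage / impedance_ohms

# A 240 V residential service fed from a pole transformer with a few
# hundredths of an ohm of total loop impedance (illustrative value):
print(bolted_fault_current(240, 0.024))  # -> 10000.0 (ten thousand amps)
```

That's why even "just" residential voltage can produce an arc flash worth keeping outside and enclosed.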

glynnenstein
Feb 18, 2014


If you want to spin up cutting discs to twice their rated speed you need to upgrade from standard safety glasses.

(only watermelons were harmed in this video by the Hydraulic Press Channel people)

https://www.youtube.com/watch?v=4ttaUA4y1c4

Sagebrush
Feb 26, 2012

Pyroclastic posted:

The system used an object's previously observed locations to help compute its speed and predict its future path. However, "if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories," the NTSB reports.

I get the batshit logic that they were using here -- if you've calculated, say, the motion path for a pedestrian and then you decide that the object is actually a bicycle, you need to update your predictions because a bicycle can move faster and in different ways than a pedestrian does.

However, just throwing out the entire prediction and starting from scratch is so loving dumb. That's like first-year computer science student level work. In many ways it doesn't matter what the object is; if it's moving at 3 meters per second into the path of the car, you can reasonably anticipate that it will continue to move in a similar fashion in the near future, and any changes in its motion will be restricted to a narrow range of vectors and accelerations that are available to earthbound objects obeying the laws of physics. You can keep the model and be aware that something is about to intersect with the car even if you're not completely sure of what it is.
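The alternative Sagebrush describes can be sketched just as briefly (again hypothetical toy code, not any vendor's real implementation): let the classification change freely while the motion history survives, so the tracker still knows *something* is crossing the car's path even while it keeps changing its mind about what.

```python
# Sketch of the fix: classification is metadata on a persistent track,
# not the track's identity. Motion history is never discarded.

class Track:
    def __init__(self):
        self.label = "unknown"
        self.positions = []  # observed (time, x) samples, kept forever

    def observe(self, t, x, label):
        self.label = label             # label may change freely...
        self.positions.append((t, x))  # ...but motion history survives

    def speed(self):
        if len(self.positions) < 2:
            return 0.0
        (t0, x0), (t1, x1) = self.positions[-2], self.positions[-1]
        return (x1 - x0) / (t1 - t0)

track = Track()
track.observe(0.0, 0.0, "unknown")
track.observe(1.0, 1.5, "pedestrian")
track.observe(2.0, 3.0, "bicycle")
print(track.speed())  # 1.5 m/s: still moving, whatever it is
```

A new label could still tighten or loosen the range of plausible future motion, but the baseline "it is moving at 1.5 m/s toward us" never gets zeroed out.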

Green Intern
Dec 29, 2008

Loon, Crazy and Laughable

The car has to calculate how much of an insurance liability it will be for Uber for every scenario before it will stop. It starts at 0 every time.

Dick Burglar
Mar 6, 2006

Green Intern posted:

The car has to calculate how much of an insurance liability it will be for Uber for every scenario before it will stop. It starts at 0 every time.

Apparently it ends at 0 every time, too.

corgski
Feb 6, 2007

Silly goose, you're here forever.

angryrobots posted:

This poo poo right here is why I harp on equipment grounding and fault protection in the wiring thread. There is a stupid amount of fault current potential at the utility level, even at residential voltage in many situations. You want that mess to be outside and enclosed, with anything coming inside protected to limit the fault potential.

Seriously. The worst I've personally been witness to was some clown using a Class II multimeter to directly meter a 250 kVA generator outputting (what was supposed to be) 208 V. In that case a decent portion of the meter vaporized.

klafbang
Nov 18, 2009
Clapping Larry

Sagebrush posted:

I get the batshit logic that they were using here -- if you've calculated, say, the motion path for a pedestrian and then you decide that the object is actually a bicycle, you need to update your predictions because a bicycle can move faster and in different ways than a pedestrian does.

However, just throwing out the entire prediction and starting from scratch is so loving dumb. That's like first-year computer science student level work. In many ways it doesn't matter what the object is; if it's moving at 3 meters per second into the path of the car, you can reasonably anticipate that it will continue to move in a similar fashion in the near future, and any changes in its motion will be restricted to a narrow range of vectors and accelerations that are available to earthbound objects obeying the laws of physics. You can keep the model and be aware that something is about to intersect with the car even if you're not completely sure of what it is.

It is not batshit or dumb at all. It is easy to point and laugh afterwards, but it is very easy to conceive of scenarios that would lead to such implementation.

The object permanence has to compete against objects disappearing (maybe only from view) and appearing. In the real world, objects don’t just change, so it is quite possible the code considered it a new object instead of flakey object detection (which is again explainable as the correct answer was excluded here).

Everybody would also have called it stupid if a Tesla hit a pedestrian because a car went past it, so it inherited the speed of the car and was hit because it was too darn slow. This is of course a ridiculous example as a Tesla would never have object permanence.

I’m not defending Uber, but saying that the problem is a lot more complex than people give it credit for, and insights like this that are obvious in hindsight are not beforehand. They should have caught it in testing, but that is literally what they did. People (well, more people than have already) are going to die before the technology gets good enough to be used safely.

That raises the question about whether it is worth it? I dunno, maybe? It turns out that the trolley problem we have to deal with is not whether a murdercar should kill 10 hobos to save one businessman, but instead whether incomplete pieces of unpredictable heaps of mess should kill people now in order to potentially save people in 20+ years when the technology is good enough.

hellotoothpaste
Dec 21, 2006

I dare you to call it a perm again..

klafbang posted:

It is not batshit or dumb at all. It is easy to point and laugh afterwards, but it is very easy to conceive of scenarios that would lead to such implementation.

The object permanence has to compete against objects disappearing (maybe only from view) and appearing. In the real world, objects don’t just change, so it is quite possible the code considered it a new object instead of flakey object detection (which is again explainable as the correct answer was excluded here).

Current consumer-facing SDKs have no trouble tracking the multitude of objects in view in a driving scenario by unique ID regardless of if the classification changed. Not keeping a useful running tally of each object's classification history in that case is extremely batshit and dumb, among other things.

There are lines of research around advanced behaviors like maintaining tracking even if an object deforms in view. That sounds like the sort of thing that should be legally required to be implemented and validated before any of these things hit the road with inattentive/low paid 'safety' drivers as fallbacks.

Sagebrush
Feb 26, 2012

klafbang posted:

insights like this that are obvious in hindsight are not beforehand. They should have caught it in testing, but that is literally what they did. People (well, more people than have already) are going to die before the technology gets good enough to be used safely.

Ow Oof this take is so loving hot

If the system is anticipated to kill innocent people in its testing, then you find a better loving way to test it or you cancel the research. We don't conduct experiments like that any more. We don't involve unwilling participants in scientific research at all, let alone research that poses a risk to their life. That's literal Nazi poo poo.

klafbang posted:

whether incomplete pieces of unpredictable heaps of mess should kill people now in order to potentially save people in 20+ years when the technology is good enough.

No. They should not. End of debate.

There is no reason that anyone should die in the development of self-driving cars. Yes, the process may take longer and cost more if we have to use motorized crash test dummies on a closed course instead of letting half-baked CS 102 projects loose on public streets, but that is the only ethical way to approach the problem. Uber's project only made it onto the streets because of greed and stupidity and it's a goddamn travesty that no one is in jail for this woman's murder.

E: like I just want you to really grasp that the trolley problem is not applicable to this situation or to the real world in general. We have decided as a society that it is never acceptable to violate a human being's right to safety and life because doing so might save more people down the line. You can't even make someone donate their blood or bone marrow to save someone else's life at little risk to their own. Saying "well, some people will be killed by our experiments, but maybe in a couple of decades we will have improved outcomes somewhat so it balances out" is just incredibly callous and, again, literally the kind of justifications used by Nazi doctors.

Sagebrush fucked around with this message at 08:51 on Nov 11, 2019

Kibayasu
Mar 28, 2010

klafbang posted:

It is not batshit or dumb at all. It is easy to point and laugh afterwards, but it is very easy to conceive of scenarios that would lead to such implementation.

The object permanence has to compete against objects disappearing (maybe only from view) and appearing. In the real world, objects don’t just change, so it is quite possible the code considered it a new object instead of flakey object detection (which is again explainable as the correct answer was excluded here).

Everybody would also have called it stupid if a Tesla hit a pedestrian because a car went past it, so it inherited the speed of the car and was hit because it was too darn slow. This is of course a ridiculous example as a Tesla would never have object permanence.

I’m not defending Uber, but saying that the problem is a lot more complex than people give it credit for, and insights like this that are obvious in hindsight are not beforehand. They should have caught it in testing, but that is literally what they did. People (well, more people than have already) are going to die before the technology gets good enough to be used safely.

That raises the question about whether it is worth it? I dunno, maybe? It turns out that the trolley problem we have to deal with is not whether a murdercar should kill 10 hobos to save one businessman, but instead whether incomplete pieces of unpredictable heaps of mess should kill people now in order to potentially save people in 20+ years when the technology is good enough.

Apparently there are still people in the world so very willing to sacrifice other unwilling people they don’t personally know on the altar of science so they can maybe enjoy some kind of unknown benefit at some indeterminate point in the future.

Mustached Demon
Nov 12, 2016

Techbros view safety the same way they do quality: expect errors so just patch them out. Like it's acceptable to have buggy launch where people die so long as it gets fixed.

Platystemon
Feb 13, 2012

BREADS
Maybe if the trolley runs over enough people it will become enlightened and stop running over people.

C.M. Kruger
Oct 28, 2013

Platystemon posted:

Maybe if the trolley runs over enough people it will become enlightened and stop running over people.

What if we connect the trolley to the internet? with a app?

Bloody Hedgehog
Dec 12, 2003

💥💥🤯💥💥
Gotta nuke something

Sagebrush posted:

We don't conduct experiments like that any more. We don't involve unwilling participants in scientific research at all, let alone research that poses a risk to their life. That's literal Nazi poo poo.

Are we really not? There's no real way to say. The US was still doing that poo poo in secret up until the 1960's, that we know of.

We live in hellworld, I wouldn't put anything past governments/big business.

`Nemesis
Dec 30, 2000

railroad graffiti

Sagebrush posted:

Ow Oof this take is so loving hot

If the system is anticipated to kill innocent people in its testing, then you find a better loving way to test it or you cancel the research. We don't conduct experiments like that any more. We don't involve unwilling participants in scientific research at all, let alone research that poses a risk to their life. That's literal Nazi poo poo.


No. They should not. End of debate.

There is no reason that anyone should die in the development of self-driving cars. Yes, the process may take longer and cost more if we have to use motorized crash test dummies on a closed course instead of letting half-baked CS 102 projects loose on public streets, but that is the only ethical way to approach the problem. Uber's project only made it onto the streets because of greed and stupidity and it's a goddamn travesty that no one is in jail for this woman's murder.

E: like I just want you to really grasp that the trolley problem is not applicable to this situation or to the real world in general. We have decided as a society that it is never acceptable to violate a human being's right to safety and life because doing so might save more people down the line. You can't even make someone donate their blood or bone marrow to save someone else's life at little risk to their own. Saying "well, some people will be killed by our experiments, but maybe in a couple of decades we will have improved outcomes somewhat so it balances out" is just incredibly callous and, again, literally the kind of justifications used by Nazi doctors.

I really appreciate this post, and it needs to be quoted in full again and again and again.

The Lone Badger
Sep 24, 2007

C.M. Kruger posted:

What if we connect the trolley to the internet? with a app?

And people can place money in the app to increase the 'cost' the trolley applies to running them over! Genius!

Mustached Demon
Nov 12, 2016

Platystemon posted:

Maybe if the trolley runs over enough people it will become enlightened and stop running over people.

Or maybe it becomes so enlightened it actively seeks out humans to run over for the betterment of the planet?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Bloody Hedgehog posted:

Are we really not? There's no real way to say. The US was still doing that poo poo in secret up until the 1960's, that we know of.

We live in hellworld, I wouldn't put anything past governments/big business.

The reason it was happening in secret is that we as a society generally agree that it's completely and totally un-loving-acceptable. Undoubtedly there are sociopaths still doing that sort of thing in various dark corners of the world, and also in silicon valley - but when such things come to light, it's the responsibility of anyone with any sort of moral center to stand up and say that yes, this is still and has always been completely unacceptable.

Don't make excuses for murderous sociopaths.

klafbang
Nov 18, 2009
Clapping Larry

Sagebrush posted:

Ow Oof this take is so loving hot

If the system is anticipated to kill innocent people in its testing, then you find a better loving way to test it or you cancel the research. We don't conduct experiments like that any more. We don't involve unwilling participants in scientific research at all, let alone research that poses a risk to their life. That's literal Nazi poo poo.


No. They should not. End of debate.

There is no reason that anyone should die in the development of self-driving cars. Yes, the process may take longer and cost more if we have to use motorized crash test dummies on a closed course instead of letting half-baked CS 102 projects loose on public streets, but that is the only ethical way to approach the problem. Uber's project only made it onto the streets because of greed and stupidity and it's a goddamn travesty that no one is in jail for this woman's murder.

E: like I just want you to really grasp that the trolley problem is not applicable to this situation or to the real world in general. We have decided as a society that it is never acceptable to violate a human being's right to safety and life because doing so might save more people down the line. You can't even make someone donate their blood or bone marrow to save someone else's life at little risk to their own. Saying "well, some people will be killed by our experiments, but maybe in a couple of decades we will have improved outcomes somewhat so it balances out" is just incredibly callous and, again, literally the kind of justifications used by Nazi doctors.

Thanks for jumping to the nazi card. That really stresses I should take your comments seriously.

I am not saying that the testing method is acceptable or that Uber should not be punished. I am saying that the error is not as obvious as everybody would make it seem. It is very easy to point afterwards and say that people should have caught it. Even if off-the-shelf APIs can follow objects, that does not imply that this also works with feature interactions that have conflicting goals.

You can catch some interactions in testing, but you can never catch all. Uber should probably have caught this one. And the next 10 that will also kill people. And Tesla should be forcibly dissolved for continuing to play crash test dummies with real people and their insane autopilot. But even after this and decades of testing there will still be issues. At some point it will be put to the test on real people; it should not have happened yet, but no matter how much it is tested, at some point people will be killed for this idea.

The reference to the trolley problem is not an attempt at a hot take, but more a reference to the same nerd-wankery that leads to people talking about whether to offer free wifi and what color the interior should be in Elon's deathtubes (not the child tube, the hypertube) when they should probably talk about making giant vacuum tubes work in the first place. Similarly, the discussion about who to kill in an accident, which is a discussion that needs to be had, is second to the very real and current discussion about whether to kill people for making a technology that could potentially be good. That is also a discussion that needs to be had; right now, the answer should probably be to forbid any real-world testing (maybe Waymo can be allowed, they seem relatively careful). But what in 5 years? 10? 20? I don't claim to know the answer and am happy I don't have to make the decision (I would just outlaw all cars, period, but realize that not everybody can get where they need to go by bike/public transport).

Bloody Hedgehog
Dec 12, 2003

💥💥🤯💥💥
Gotta nuke something

Jabor posted:

... in various dark corners of the world, and also in silicon valley

But you repeat yourself.


Platystemon
Feb 13, 2012

BREADS

klafbang posted:

I am saying that the error is not as obvious as everybody would make it seem.

Hard disagree.
