|
Self-driving cars actually reduce deaths because they make pedestrians more cautious when crossing roads under conditions where a normal sober driver would notice them and avoid a crash.
|
# ? Nov 11, 2019 09:48 |
|
Why not make the car stop no matter what is in front of it???
|
# ? Nov 11, 2019 09:50 |
|
Because an unexpected full stop because some piece of paper flew by the cameras would also be a source of accidents. The real solution is to eliminate all cars
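For what it's worth, the false-positive tradeoff is easy to sketch. This toy example (field names and thresholds are entirely made up; it resembles no real AV stack) shows why "stop no matter what" means phantom braking, and why real systems gate the decision on confidence and track persistence:

```python
# Toy sketch, hypothetical only: braking on every detection vs. requiring
# a confident, persistent track. All names and thresholds are invented.

def should_brake_naive(detections):
    """Brake if anything at all appears in front of the car."""
    return len(detections) > 0

def should_brake_filtered(detections, min_confidence=0.8, min_frames=3):
    """Brake only on detections that are confident and persistent, so a
    piece of paper blowing past one frame doesn't trigger a full stop
    in moving traffic."""
    return any(
        d["confidence"] >= min_confidence and d["frames_tracked"] >= min_frames
        for d in detections
    )

paper_gust = [{"confidence": 0.3, "frames_tracked": 1}]
pedestrian = [{"confidence": 0.95, "frames_tracked": 12}]

print(should_brake_naive(paper_gust))     # True: phantom emergency stop
print(should_brake_filtered(paper_gust))  # False: ignored
print(should_brake_filtered(pedestrian))  # True: real, sustained object
```

The catch, of course, is that the filter trades one failure mode for the other: raise the thresholds too far and you brake late, or not at all.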
|
# ? Nov 11, 2019 09:54 |
|
klafbang posted:Thanks for jumping to the nazi card. That really stresses I should take your comments seriously. The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours.
|
# ? Nov 11, 2019 10:23 |
|
Memento posted:The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours. Dismissing the moral discussion as nazi poo poo because you don't like it is not resolving anything. At some point you have to have that discussion, and people are not going to agree. At some point there will be losses; no amount of testing will prevent that 100%, and we should probably discuss how many are acceptable beforehand, because otherwise we get techbros beta testing on people. We are already there, so dismissing the discussion is likely to just cause more harm. I would be perfectly fine just getting rid of cars, self-driving or not, but I realize that not everybody agrees and I don't get to have things 100% my way. And I don't go around comparing people I disagree with, or their opinions, to nazis, because that is not engaging in a meaningful way.
|
# ? Nov 11, 2019 10:37 |
|
Ok mengele.
|
# ? Nov 11, 2019 10:38 |
|
I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…” That’s where it ends, because I cannot think of a good example where innocent people had to be sacrificed for the greater good. It is largely a thing that exists only in the imaginations of philosophers and fascists. That’s a problem for the whole concept. The closest I can come to a good example is early aviation, but the risks there were mostly borne by consenting pioneers. Anywhere they weren’t, it should be denounced—ærodromes near population centres, for example.
|
# ? Nov 11, 2019 10:45 |
|
Turtlicious posted:Why not make the car stop no matter what is in front of it??? We have a very, very long period of mixed operation ahead, with human-driven and self-driving vehicles having to interact with each other, and if you don't think people would abuse a car that always stops, well, I don't know what to say. Memento posted:The fact that it's insanely relevant to the discussion and you just dismiss it with a flip remark speaks volumes about yours. Yes.
|
# ? Nov 11, 2019 11:10 |
|
hellotoothpaste posted:Current consumer-facing SDKs have no trouble tracking the multitude of objects in view in a driving scenario by unique ID regardless of if the classification changed. Not keeping a useful running tally of each object's classification history in that case is extremely batshit and dumb, among other things.
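A running tally of each tracked object's classification history really is a few lines of code. Here's a minimal sketch (the ID, the labels, and the majority-vote rule are all invented for illustration; real perception stacks are far more involved):

```python
# Hypothetical sketch of "keep a running tally of each object's
# classification history": the track ID persists even when the
# per-frame classification flips.
from collections import Counter

class TrackedObject:
    def __init__(self, object_id):
        self.object_id = object_id
        self.history = []  # classification label per frame, in order

    def observe(self, label):
        """Record this frame's classification without discarding the past."""
        self.history.append(label)

    def best_guess(self):
        """Most common classification so far, so one frame of 'bicycle'
        doesn't erase several frames of 'pedestrian'."""
        return Counter(self.history).most_common(1)[0][0]

obj = TrackedObject(object_id=42)
for label in ["pedestrian", "pedestrian", "bicycle", "pedestrian"]:
    obj.observe(label)
print(obj.best_guess())  # pedestrian
```

The point being: resetting everything on every reclassification, as described in the NTSB findings, is a design choice, not a technical limitation.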
|
# ? Nov 11, 2019 11:54 |
|
evil_bunnY posted:Uber's just reckless. This needs to be front and center in everyone's minds during this discussion. Uber is a company that has been sued so many times that those four links are just what I found on the first page of search results. They cut as many corners as they could in an effort to pump up their stock price before the IPO and by any metric it failed, because of their insanely shoddy business practices. The only reason they're as far as they are in the world of self-driving cars is because they stole technology from Google. They're the worst kind of corporate scum and deserve as much scorn as you want to heap on them.
|
# ? Nov 11, 2019 12:02 |
|
Platystemon posted:I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…” But I think the relevant point here is that even those dangerous jobs draw the line at the worker being compensated for, and made aware of, those risks, while a lot of the gung-ho self-driving car folks lean toward the opposite: the rich person in the car should be protected over everything else on the road.
|
# ? Nov 11, 2019 12:08 |
|
evil_bunnY posted:Uber's just reckless.
|
# ? Nov 11, 2019 12:16 |
|
evil_bunnY posted:My loving camera from 6 years ago has autofocus that can deal with temporary subject occlusion, Uber's just reckless. No, Uber is criminally negligent while pretending to be as benign as merely 'wreckless' to get away with doing as little as possible. If you play dumb, people assume less of you and hold you responsible for less. Don't ever let 'oh, that didn't occur to us' stand as an excuse from a corporation.
|
# ? Nov 11, 2019 12:34 |
|
zedprime posted:Risk management is inseparable from the calculus of human life improvement. Farms, mines, and factories are operated with the idea that at least a few eggs are going to break to make the better-society omelette. These things are productive industries, and they have been from the start. Uber’s cars weren’t taking the visually impaired to concerts or augmenting human drivers’ reaction times. What Uber was doing is at best human experimentation, and, to be less credulous, they risked lives to boost their stock with no intention of ever having a functional machine. One experiment that isn’t on Wikipedia’s list is Operation Bongo II. The USAF subjected the people of Oklahoma City to regular sonic booms to see how they would handle it. Not well, it turns out. The experiment ended early, and it threw the whole notion of supersonic transit into question. I see some parallels there.
|
# ? Nov 11, 2019 12:45 |
|
StrangersInTheNight posted:no, Uber is criminally negligent while pretending to simply be as benign as 'wreckless' to get away with doing as little as possibly. if you play dumb,
|
# ? Nov 11, 2019 13:05 |
They don’t care; there’s money to be made, maybe, someday.
|
|
# ? Nov 11, 2019 13:06 |
|
poster: Comparing uber with nazis is callous and we should really think about current sacrifices in self-driving cars to save future people uber: https://twitter.com/BNONews/status/1193698590641860608 MononcQc fucked around with this message at 14:06 on Nov 11, 2019 |
# ? Nov 11, 2019 14:02 |
What the gently caress.
|
|
# ? Nov 11, 2019 14:04 |
|
Admiral Joeslop posted:What the gently caress.
|
# ? Nov 11, 2019 14:26 |
|
Uber CEO: Hey, mistakes were made, and people got forgiven. Now I'm not saying Hitler was right, but....
|
# ? Nov 11, 2019 14:33 |
|
Wait you're saying someone who runs a large company is a complete psychopath? ...no it can't be Guyver fucked around with this message at 14:42 on Nov 11, 2019 |
# ? Nov 11, 2019 14:38 |
|
Well, they replaced the last CEO because he was a walking PR disaster, so you'd think this guy wouldn't say poo poo like this. They can't help it though, they're talking from the heart and being the best person they can be
|
# ? Nov 11, 2019 15:37 |
Their entire business model requires self driving car story time to be made real in order to ever turn a profit. The government isn't particularly interested in forcing these fuckers to be safe so they can just kill a few people in the pursuit of this goal with very limited repercussions. All so we don't have to fix transit.
|
|
# ? Nov 11, 2019 15:45 |
|
There was a very dangerous intersection near where I live. I witnessed numerous near misses, until finally there was an accident where a young girl got t-boned because she misjudged the speed of an oncoming car. This was the Xth accident in Y-years which could have been prevented by a stoplight, so they finally installed one. That's just in case you think America is going to be proactive about self-driving cars. The code and laws which keep us safe in the future will be written in blood today.
|
# ? Nov 11, 2019 15:50 |
|
Nocheez posted:The code and laws which keep us safe in the future will be written in blood today. This sentence right here is the soul of OSHA in general, and this thread in particular.
|
# ? Nov 11, 2019 15:56 |
|
It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there).
|
# ? Nov 11, 2019 16:38 |
|
Cichlidae posted:It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there). Fundamentally, they're just a new consumer product. We can get along just fine without it until it reaches a level of safety we're comfortable with. That venture capitalists are willing to get a return on their investment through human sacrifice is irrelevant.
|
# ? Nov 11, 2019 16:47 |
|
Cichlidae posted:It seems rational that self-driving cars would be allowed once their crash and fatality rates were lower than human drivers, because at that point they'd be saving lives. And given that 97% of crashes are due at least in part to human error, that's a pretty low bar to clear (20 years past when the optimists say we'll be there). If it were anything close to rational we'd all be taking the bus and other public transit means because it's already demonstrably safer than cars with far lower infrastructure costs once you consider urban sprawl, and the tech is available already. That being said, in an investigation, you generally want to start at human error, you don't stop there.
|
# ? Nov 11, 2019 16:47 |
|
lol, just lol if you haven't figured out how to do the chain link bomb jump from Breath of the Wild (I attempt to backflip off a tree stump and instead immediately pwn my neck)
|
# ? Nov 11, 2019 17:12 |
|
MononcQc posted:poster: Comparing uber with nazis is callous and we should really think about current sacrifices in self-driving cars to save future people Imagine defending anything this company is doing
|
# ? Nov 11, 2019 17:16 |
|
Platystemon posted:I started a thought with “Uber’s self driving car is more like Nazi ‘science’ than it is to…” Vaccinations. No, no, I'm not an antivax nut, vaccination is correct and good and an enormous social benefit and parents who don't vaccinate their kids are morons. But there are deaths, and I think this is a good example. Notably the swine flu mass vaccination, when we were scared shitless of a repeat of the 1918 epidemic, and ended up giving a bunch of people Guillain-Barré syndrome and 50-odd people died as a result. Polio's another example; even setting aside the Cutter incident, the vaccine's been so successful at reducing polio that there are now more cases of vaccine-induced polio than there are of "wild" polio.
|
# ? Nov 11, 2019 18:30 |
|
Phanatic posted:Vaccinations. Vaccines offer a potential benefit to the people testing them. What benefit would have gone to the pedestrian Uber killed? Where is her signed consent form?
|
# ? Nov 11, 2019 18:53 |
|
klafbang posted:Thanks for jumping to the nazi card. That really stresses I should take your comments seriously. This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation, which is the kind of work Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to:
1.) Identify all of the paths in your code that can potentially lead to a hazardous condition due to logic errors.
2.) Identify all data that could lead to a hazardous condition if it is out of range or sensed or calculated incorrectly.
3.) Identify hazardous situations that your software could find itself creating, and work backwards through the logic in your code base to determine whether those hazardous situations could conceivably occur.
Frankly, there is no evidence that Uber ever did this. And the problem with using neural nets and AI to classify objects is that there is no conceivable way to establish that those systems function reliably or correctly. Any experienced engineer of a safety-critical software system would have recognized the serious risks involved in feeding the output data from your impossible-to-verify AI classification system into your object detection and object tracking routines. You cannot test safety or quality back into a piece of software if it was never developed to be safe software in the first place. By the time it makes it to QA, if the architecture and design of the software do not adhere to quality or safety standards, no amount of testing will change the architecture or design to be of sufficient quality. And don't get me started on how negligent it is to hire a safety driver but not provide any methods of ensuring they're paying attention to the road the entire time. Even putting a little "object classification checker" minigame on a tablet or HUD would have been better than what Uber did, which was nothing.
The rail industry has had this figured out for decades, and it's also pretty loving negligent to not hire anyone who has experience designing those kinds of systems to design a system to keep the safety driver involved in the monitoring process.
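The second item in that list, rejecting out-of-range data before it reaches downstream logic, is cheap to illustrate. A minimal sketch, with invented ranges, names, and exception type:

```python
# Minimal illustration of rejecting implausible sensed values instead of
# silently feeding them downstream. All ranges and names are invented.

class SensorRangeError(ValueError):
    """Raised when a reading falls outside its physically plausible range."""
    pass

def validate_range(name, value, low, high):
    """Gate a sensed or calculated value before the planning code sees it."""
    if not (low <= value <= high):
        raise SensorRangeError(f"{name}={value} outside [{low}, {high}]")
    return value

speed = validate_range("speed_mps", 17.0, 0.0, 70.0)  # plausible, passes through

try:
    validate_range("range_m", -3.2, 0.0, 250.0)  # physically impossible reading
except SensorRangeError as err:
    print("rejected:", err)
```

None of this is exotic; it's the boring, tedious discipline the post above is describing, applied one value at a time.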
|
# ? Nov 11, 2019 18:54 |
|
Sex Skeleton posted:This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation. Which is stuff that Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to: Here's how Silicon Valley operates: https://www.theverge.com/2018/3/20/17144090/uber-car-accident-arizona-safety-anthony-levandowski-waymo quote:New York Magazine once attributed Levandowski as saying, “I’m pissed we didn’t have the first death,” to a group of Uber engineers after a driver died in a Tesla on autopilot in 2016. (Levandowski has denied ever saying it).
|
# ? Nov 11, 2019 19:01 |
|
How the hell was Uber not found guilty??
|
# ? Nov 11, 2019 19:07 |
|
oohhboy posted:How the hell was Uber not found guilty?? Arizona.
|
# ? Nov 11, 2019 19:10 |
|
Uncle Enzo posted:Vaccines offer a potential benefit to the people testing them. What benefit would have gone to the pedestrian Uber killed? Where is her signed consent form? Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned.
|
# ? Nov 11, 2019 19:13 |
|
Sex Skeleton posted:This isn't something you can QA into a product. It would have required very careful and tedious analysis of the software design and implementation. Which is stuff that Agile developers don't really do as a rule. But on safety-critical systems it's negligent not to: For comparison, this is how to write perfect software when lives are on the line. It results in code that's pretty much entirely bug-free (not to mention on schedule and on budget) but it's utterly unlike commercial software development and no one trying to make a profit could work like that.
|
# ? Nov 11, 2019 19:18 |
|
Phanatic posted:Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned. And this is exactly the same as being run over by a self driving Uber car programmed by idiots because:
|
# ? Nov 11, 2019 19:18 |
|
Phanatic posted:Not all the people who get vaccine-derived polio are people who consented to be vaccinated, because people who are vaccinated with OPV can shed live virus and infect others. Particularly immunocompromised people who themselves cannot be vaccinated. Obviously the small risk of this pales compared to the enormous benefit conferred by the vaccinations, but that's the 'greater good' that was mentioned. Nonetheless, that was a situation where the developers had made it as safe as they could, and an independent board declared the risk to be acceptable. Nothing ever has no risk associated with it. It is not at all analogous to self-driving cars, where the developers' first priority is to start making money quickly and killing people is part of their R&D.
|
# ? Nov 11, 2019 19:19 |