Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


Yeah, but that doesn't tell us about autonomous technology as a whole. It surely seems Uber's tech was poo poo.


Ola
Jul 19, 2004

Waymo is still in the lab, isn't it? Or "lab", controlled road conditions. Tesla is on the road and hoboy it's not safer than humans.

Lots of cars come with good lane keepers now. While none of them, not even Tesla, are good at driving winding roads, being able to cruise along in a stable, straight line on the highway while not moving a muscle is very relaxing and makes it easier to be alert, if you're not an idiot. If that's all that comes out of this hype, fine.

susan b buffering
Nov 14, 2016

Anyone who’s serious about reducing fatalities from vehicles knows the solution is lower speed limits in the short term, and an overall reduction in driving in the long-term, not some technological solution that won’t see widespread adoption for decades if it ever does.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Jeb Bush 2012 posted:

the actual problem is that as-is autonomous vehicles are way more dangerous,


Jeb Bush 2012 posted:

self-driving cars killed their first person way before driving the average number of miles a human would take to kill someone https://www.washingtonpost.com/opin...fce8_story.html

As the title of that article says, it is too soon to tell...at least as of a year and a half ago.

Your original post implies a level of confidence that seems misplaced, no?

flatluigi
Apr 23, 2008

here come the planes
autonomous cars might have a higher ceiling of safety than human piloted cars but they're absolutely not there yet overall

The Fool
Oct 16, 2003


Autonomous cars are only safer if they are the only vehicles on the road.

Jeb Bush 2012
Apr 4, 2007

A mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.

Thermopyle posted:

As the title of that article says that it is too soon to tell...at least from a year and a half ago.

Your original post implies a level of confidence that seems misplaced, no?

the title of the article is extremely generous (and was presumably written that way to appease AV fans who said "well they'll be safer some day!"). the actual body of the article makes the case quite clear

but again, this is beside the point - AVs are not even being driven under conditions which would provide a fair test of their safety, because they're clearly not ready yet

tankadillo
Aug 15, 2006

https://twitter.com/DavidZipper/status/1111077290082009094

Since autonomous cars can’t solve their problems with code, their success is contingent on making police be more aggressive towards pedestrians instead.

The Fool
Oct 16, 2003


tankadillo posted:

https://twitter.com/DavidZipper/status/1111077290082009094

Since autonomous cars can’t solve their problems with code, their success is contingent on making police be more aggressive towards pedestrians instead.

That’s basically how automobiles got control of the roads in the first place

https://www.vox.com/2015/1/15/7551873/jaywalking-history

QuarkJets
Sep 8, 2008

tankadillo posted:

https://twitter.com/DavidZipper/status/1111077290082009094

Since autonomous cars can’t solve their problems with code, their success is contingent on making police be more aggressive towards pedestrians instead.

Holy poo poo, literally "I don't give a gently caress if a jaywalker dies"

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Silicon Valley pioneered self-driving cars. But some of its tech-savvy residents don’t want them tested in their neighborhoods.

quote:

Karen Brenchley is a computer scientist with expertise in training artificial intelligence, but this longtime Silicon Valley resident has pangs of anxiety whenever she sees Waymo self-driving cars maneuver the streets near her home.

The former product manager, who has worked for Microsoft and Hewlett-Packard, wonders how engineers could teach the robocars operating on her tree-lined streets to make snap decisions, speed and slow with the flow of traffic and yield to pedestrians coming from the nearby park. She has asked her husband, an award-winning science-fiction author who doesn’t drive, to wear a shiny vest while cycling to ensure autonomous vehicles spot him in a rush of activity.

The problem isn’t that she doesn’t understand the technology. It’s that she does, and she knows how flawed nascent technology can be.

Read the whole article. It's good.

comedyblissoption
Mar 15, 2006

skull mask mcgee posted:

Anyone who’s serious about reducing fatalities from vehicles knows the solution is lower speed limits in the short term, and an overall reduction in driving in the long-term, not some technological solution that won’t see widespread adoption for decades if it ever does.

we could have instant massive reduction in automobile fatalities if we retrofitted the existing infrastructure to just use massive amounts of buses, but obviously this isn't a reasonable short-term solution because our society is controlled by people who have too much of a stake in the status quo

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


A lot of Americans are anti-public transit because it benefits poor black people.

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION

Volte posted:

You have to look at both sides of the equation. The question being posed is basically this: 'In order to greatly reduce your chance of being killed in a car accident, are you willing to increase (by a smaller amount) your chance of being killed some other way?' The fact that "some other way" is a different type of car accident in this situation is irrelevant because the point is that the distribution is different enough that it might as well be an unrelated event. I'm not even asserting that the answer to the question is no, just that it doesn't seem like anybody even asked.

Uh, no it's not irrelevant, you just want it to be because that's the bit that makes your argument stupid as gently caress.

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

so you just buy the marketing premise that self-driving cars work as advertised as a given truth then, and the burden of proof is on us, instead of on them to prove that it's somehow NOT actually dumber than a four year old's understanding of space and movement and human behavior?

in the coding horrors thread no less

the stupid are cocksure while the intelligent are full of doubt

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION

Dumb Lowtax posted:

so you just buy the marketing premise that self-driving cars work as advertised as a given truth then, and the burden of proof is on us, instead of on them to prove that it's somehow NOT actually dumber than a four year old's understanding of space and movement and human behavior?

in the coding horrors thread no less

the stupid are cocksure while the intelligent are full of doubt

He's not saying anything of the sort. You can't prove a negative - i.e. you can't prove for certain that crashes AREN'T happening as often as with human drivers. What you can do, however, is provide counter-evidence to prove that they ARE happening as often or more often. He's just asking people to argue honestly.

susan b buffering
Nov 14, 2016

comedyblissoption posted:

we could have instant massive reduction in automobile fatalities if we retrofitted the existing infrastructure to just use massive amounts of buses, but obviously this isn't a reasonable short-term solution because our society is controlled by people who have too much of a stake in the status quo

i dream about this literally every time i have to drive in Atlanta rush hour traffic

susan b buffering
Nov 14, 2016

a hot gujju bhabhi posted:

He's not saying anything of the sort. You can't prove a negative - i.e. you can't prove for certain that crashes AREN'T happening as often as with human drivers. What you can do, however, is provide counter-evidence to prove that they ARE happening as often or more often. He's just asking people to argue honestly.

that’s not what “you can’t prove a negative” means

Volte
Oct 4, 2004

woosh woosh

a hot gujju bhabhi posted:

Uh, no it's not irrelevant, you just want it to be because that's the bit that makes your argument stupid as gently caress.

"Getting hit by a car being driven by a person" and "getting hit by an unmanned machine gone haywire" are in fact two different things.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Dumb Lowtax posted:

so you just buy the marketing premise that self-driving cars work as advertised as a given truth then, and the burden of proof is on us, instead of on them to prove that it's somehow NOT actually dumber than a four year old's understanding of space and movement and human behavior?

in the coding horrors thread no less

the stupid are cocksure while the intelligent are full of doubt

What, I didn't claim any of that.

My prior, in fact, is that it's entirely likely that current automation is dumber than a four-year-old. I'm just not going to express that confidently without the data.

Thermopyle fucked around with this message at 04:33 on Nov 8, 2019

Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

Ok here's data


(Since you felt so compelled to defend something that's conspicuously irresponsible in how it is built, marketed, and deployed. There are better things to second-guess and demand receipts for. Here are the receipts anyway. Many here have already seen it.)

Happy Thread fucked around with this message at 06:25 on Nov 8, 2019

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

tankadillo posted:

https://twitter.com/DavidZipper/status/1111077290082009094

Since autonomous cars can’t solve their problems with code, their success is contingent on making police be more aggressive towards pedestrians instead.

If we accept for the sake of argument that it would be fine to do what the automotive industry rep is reported to have suggested, that is still a very North America-centric view of things. Do they want to sell this technology in other parts of the world? I assume they do. But in my country (the UK), there is no concept in law of "jaywalking". There is of course a notion of when it is and isn't safe to cross the road, but there's nothing in law that sets out when it is a crime to walk across the road*. What does the guy want to do about that to make life easier for his dumb car AI?

* to prevent pedantry, I looked this up and I acknowledge in this footnote that there are particular roads such as motorways for which it is actually a crime, but motorway driving is not the kind of driving environment we're talking about here.

Pollyanna
Mar 5, 2005

Milk's on them.


This entire conversation is a great example of why “stop putting politics in my hobby” is so stupid. There is no running away.

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Volte posted:

"Getting hit by a car being driven by a person" and "getting hit by an unmanned machine gone haywire" are in fact two different things.
I think the assumption made by people saying "it doesn't have to be safe, only safer" is that they are probably not so different that we, at least as a society, care about the difference as long as they net out to a reduction in deaths and injuries. It's true that, in a future where autonomous cars are common and manage to somehow reduce total vehicle-related fatalities, the family of the person who got hit by the autonomous car is probably going to be upset and blame autonomous cars in general ("If there had been a person behind the wheel, John would still be alive!"), and likewise the family of the person hit by a human controlled car is going to be upset and ask why the driver was so irresponsible and insisted on driving themselves instead of using an AI car.

That said, you are probably correct that if the breakdown of deaths is significantly different, then we'll have to, as a society, decide if the trade-off is worth it. As an extreme example, if autonomous cars got rid of all traffic fatalities except for kids at school bus stops, which quintupled, would that be a trade-off we'd be willing to make? Is going from 35k+ deaths (including ~1,200 children) to ~100 deaths a year, but all children, a choice our society would make?

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

quote:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

So we would probably be OK with a change that made human drivers only kill 100 children per year, but if that same death toll comes from a self-driven car, it's a problem.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Jethro posted:

I think the assumption made by people saying "it doesn't have to be safe, only safer" is that they are probably not so different that we, at least as a society, care about the difference as long as they net out to a reduction in deaths and injuries. It's true that, in a future where autonomous cars are common and manage to somehow reduce total vehicle-related fatalities, the family of the person who got hit by the autonomous car is probably going to be upset and blame autonomous cars in general ("If there had been a person behind the wheel, John would still be alive!"), and likewise the family of the person hit by a human controlled car is going to be upset and ask why the driver was so irresponsible and insisted on driving themselves instead of using an AI car.

That said, you are probably correct that if the breakdown of deaths is significantly different, then we'll have to, as a society, decide if the trade-off is worth it. As an extreme example, if autonomous cars got rid of all traffic fatalities except for kids at school bus stops, which quintupled, would that be a trade-off we'd be willing to make? Is going from 35k+ deaths (including ~1,200 children) to ~100 deaths a year, but all children, a choice our society would make?

The problem is not the existence of the decision but rather, who is making the decision. It's too important to be left to a bunch of software developers and their bosses who are motivated only by how to make as much money as possible.

Volte
Oct 4, 2004

woosh woosh

Jethro posted:

I think the assumption made by people saying "it doesn't have to be safe, only safer" is that they are probably not so different that we, at least as a society, care about the difference as long as they net out to a reduction in deaths and injuries.

Yes, that is the assumption and it is obviously wrong as far as I'm concerned. There's a bit of a smokescreen here in the rhetoric by saying cars are "self-driving" and calling it "intelligence" and using words like "learning". Personifying algorithms is the new thing. It makes it easy to forget that this isn't a new kind of hyper-smart driver we're talking about here, this is a completely new kind of vehicle. It's as different from regular cars as cars were from horse-drawn carriages, if not more so. It's shaped the same and shares the space, but otherwise you might as well forget everything you thought you knew about cars and how they behave on the road. Coexisting safely with human-controlled vehicles both as a pedestrian and as a driver is largely about communication and awareness. Coexisting safely with autonomous vehicles as a pedestrian and as a driver is about, uh, I dunno, faith? Not much you can do to communicate with an autonomous vehicle, and not much you can do to predict its behaviour either. I'm sure most of us have seen suspicious behaviour from drivers on the road that make us think "oh that person is drunk" or "that person loving sucks at driving", and give them lots of room. Humans recognize human behaviour. It's what lets us walk down the sidewalk with cars zooming past 4 feet away without constantly being afraid that one of them is going to jump the curb. There's nothing you can do to predict an autonomous vehicle suddenly flooring it onto the sidewalk because its internal memory flipped a bit after 5 years of flawless operation.

In other words, "it doesn't have to be safe, just safer" is a bit psychotic here. It does have to be safe, and don't let sci-fi "building a better thinking machine" jargon fool you into thinking otherwise. Pedestrians aren't collateral damage in some war for the greater good. This is startup culture and car culture at their respective worsts teaming up to inextricably embed themselves into society at any cost and disingenuously appealing to your inner actuary to get you on board. If there is a future where all cars are autonomous, it needs to actually be safer for pedestrians and drivers alike. And that's going to mean a lot more than building a better algorithm. It's going to require infrastructural changes, designated areas where autonomous vehicles can go that are physically separate from pedestrian areas, and fewer vehicles on the road in general. Self-driving cars is a solution to the wrong problem. The answer is, and always has been, reducing the number of drivers on the road, not replacing them with unaccountable automatons. What if all the money that has been spent on autonomous driving up to this point had been spent on expanding public transit for everyone?

ultrafilter posted:

Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

So we would probably be OK with a change that made human drivers only kill 100 children per year, but if that same death toll comes from a self-driven car, it's a problem.

That's about a different type of algorithm. Predicting the outcome of some mathematical model is not the same thing as being able to safely interact autonomously with other humans.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Dumb Lowtax posted:

Ok here's data


(Since you felt so compelled to defend something that's conspicuously irresponsible in how it is built, marketed, and deployed. There are better things to second-guess and demand receipts for. Here are the receipts anyway. Many here have already seen it.)

1. I didn't defend autonomous tech.
2. "so compelled" implies some herculean effort. I posted a few characters in response to a somewhat logically weak assertion.
3. What makes other things "better" to ask receipts for? Is it OK to form beliefs about things with weak data just because it's politically expedient?

Xarn
Jun 26, 2015
There is a thread for stupid tech slapfights: https://forums.somethingawful.com/showthread.php?threadid=3763277

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Thermopyle posted:

1. I didn't defend autonomous tech.
2. "so compelled" implies some herculean effort. I posted a few characters in response to a somewhat logically weak assertion.
3. What makes other things "better" to ask receipts for? Is it OK to form beliefs about things with weak data just because it's politically expedient?

I think you're getting pushback here from those of us who've been following the Musk and Tech Bubble threads in YOSPoS where we've seen numbers on how Tesla's """Autopilot""" puts drivers in danger more than they put themselves in danger and people have at least done some napkin math (or maybe found actual sources?) showing that current self-driving tech has worse accident per mile numbers than people.

Also stories about how self-driving cars were running red lights in Pittsburgh but I can't find articles about that, even though it ought to be pretty easy :\

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Munkeymon posted:

I think you're getting pushback here from those of us who've been following the Musk and Tech Bubble threads in YOSPoS where we've seen numbers on how Tesla's """Autopilot""" puts drivers in danger more than they put themselves in danger and people have at least done some napkin math (or maybe found actual sources?) showing that current self-driving tech has worse accident per mile numbers than people.

Also stories about how self-driving cars were running red lights in Pittsburgh but I can't find articles about that, even though it ought to be pretty easy :\

Well, I've never opened those threads or knew they existed. It's actually pretty surprising to me that I got any pushback at all.

The weird thing is that I just asked for sources for the confident claim that autonomous tech is way worse than humans, and it took a bit before anyone even attempted to do that...and none of the sources provided so far support the confidence of the original claim! Both the Washington Post and Ars Technica stories make the point in multiple places that the data we have doesn't support a strong position on the question at hand. (Though the Washington Post author tries to simultaneously spin it the other way.)

What we do have is reasons to not support strong claims that autonomous tech is safer than humans, and given the stakes we should proceed as if it is way worse than humans.

"Proceeding as if" is not the same thing as "claiming it is so".

From the sources people have provided so far, it looks like that, as of 2016, the best available data (which is not great data) was that Tesla vehicles were worse than humans. This isn't enough to claim that "autonomous tech is way worse than humans".

I think it's likely that if we did have better data it would confirm the limited data we have from 2016, with some bit of improvement since then.

Tesla should be forced to be very, very transparent with its data. I guess Tesla is the only one with a widespread autonomous deployment?
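That sample-size point can be made concrete with a quick back-of-the-envelope sketch. The 1.18 deaths per 100M vehicle-miles baseline is NHTSA's 2016 US figure; the "one fatality in ~130M autonomous-mode miles" is an illustrative stand-in for the early Tesla numbers, not an official statistic:

```python
import math

def poisson_ci_one_event(alpha=0.05):
    """Exact two-sided CI for a Poisson mean when exactly 1 event was observed."""
    # lower bound solves 1 - exp(-L) = alpha/2
    lower = -math.log(1 - alpha / 2)
    # upper bound solves exp(-U) * (1 + U) = alpha/2; solve by bisection
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.exp(-mid) * (1 + mid) > alpha / 2:
            lo = mid
        else:
            hi = mid
    return lower, lo

HUMAN_BASELINE = 1.18   # US deaths per 100M vehicle-miles (NHTSA, 2016)
av_exposure = 1.3       # hypothetical: ~130M autonomous-mode miles, in 100M units
lower, upper = poisson_ci_one_event()
print(f"naive rate: {1 / av_exposure:.2f} deaths per 100M miles")
print(f"95% CI:     {lower / av_exposure:.3f} to {upper / av_exposure:.2f}")
# The interval runs from far below to far above the human baseline,
# so one death in ~130M miles supports neither "safer" nor "way worse".
```

One fatality is simply too few events: the exact Poisson interval for a single observation spans more than two orders of magnitude, which is the statistical version of "too soon to tell."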

Thermopyle fucked around with this message at 19:33 on Nov 8, 2019

Ola
Jul 19, 2004

Proof by counterexample: Autonomous cars haven't shown that they are safer than human drivers, because they have not been invented yet.

Tesla's autopilot is steering automation, not autonomy.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Thermopyle posted:

Well, I've never opened those threads or knew they existed. It's actually pretty surprising to me that I got any pushback at all.

The weird thing is that I just asked for sources for the confident claim that autonomous tech is way worse than humans, and it took a bit before anyone even attempted to do that...and none of the sources provided so far support the confidence of the original claim! Both the Washington Post and Ars Technica stories make the point in multiple places that the data we have doesn't support a strong position on the question at hand. (Though the Washington Post author tries to simultaneously spin it the other way.)

What we do have is reasons to not support strong claims that autonomous tech is safer than humans, and given the stakes we should proceed as if it is way worse than humans.

"Proceeding as if" is not the same thing as "claiming it is so".


You're right, if a bit pedantic, IMO. It's just too easy for the average person to conflate those in either direction, which is the root of the problem: Elon claiming ""Autopilot"" is safe gets people thinking they can proceed as if their cars can drive them home drunk, which has happened at least once that I'm aware of.

quote:

From the sources people have provided so far, it looks like that, as of 2016, the best available data (which is not great data) was that Tesla vehicles were worse than humans. This isn't enough to claim that "autonomous tech is way worse than humans".

I think it's likely that if we did have better data it would confirm the limited data we have from 2016, with some bit of improvement since then.

Tesla should be forced to be very, very transparent with it's data. I guess Tesla is the only one with a widespread autonomous deployment?

I think the situation is that Tesla is the one running it on public roads in California, which is the only state to require disclosure to a state agency.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Ola posted:

Proof by counterexample: Autonomous cars haven't shown that they are safer than human drivers, because they have not been invented yet.

Tesla's autopilot is steering automation, not autonomy.

Now this is something I have high confidence in.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Munkeymon posted:

You're right, if a bit pedantic, IMO.

Just to be clear, I don't think I was being pedantic in the sense of "well actually you're wrong because of this pedantic thing"

I didn't ask for sources because I wanted to prove him wrong in Internet Arguing Master style. I asked for sources because I was surprised that someone could confidently make that claim about a subject my surface-level knowledge suggested you couldn't be that confident about.

QuarkJets
Sep 8, 2008

ultrafilter posted:

Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err


So we would probably be OK with a change that made human drivers only kill 100 children per year, but if that same death toll comes from a self-driven car, it's a problem.

That's not really a conclusion that you can derive from the results of that study.

That study is showing that people more quickly lose confidence in an algorithm when algorithms fail. That doesn't mean that people are happy with human failures. So if a machine that makes your coffee makes a lovely cup 1 in 10 times, you're going to lose confidence in that machine more quickly than if Gary fucks up making your coffee 1 in 10 times, but you're still mad at Gary for being a fuckup. Both failure rates are deemed unacceptable, but on average you're going to abandon the machine sooner than you'd abandon Gary because implicitly we understand that the machine probably isn't even capable of learning how to make a better cup of coffee but at least Gary has some hope of doing that.

Applied to autonomous vehicles, we wouldn't be okay with 100 children dying per year at the hands of human drivers, but we'd lose confidence in the overall ability of humans in general to avoid vehicular child murder more slowly than we'd lose confidence in algorithms to do the same. Even if this is a huge reduction in the actual death-by-vehicles rate for children, no one's going to be happy about it. And since basically everyone understands that most software is poo poo, when an algorithm fucks up and causes a car to ram into a jaywalker at full speed that just confirms what they already strongly suspected. This level of mistrust is probably fine in general, once you consider the potential scale of the problem. If 1 million cars all suffer from a common autonomous vehicle flaw, causing them to all have the driving effectiveness of a drunk person, the impact is much larger than if 1 person drives drunk.

QuarkJets fucked around with this message at 21:10 on Nov 8, 2019

BarbarianElephant
Feb 12, 2015
The fairy of forgiveness has removed your red text.

It's not one person driving drunk, though. It's x% of a population driving drunk.

And humans have other issues, like age-related degeneration turning good drivers into hazards, or overconfident idiots driving with their knees while texting and snacking.

QuarkJets
Sep 8, 2008

BarbarianElephant posted:

It's not one person driving drunk, though. Its x% of a population driving drunk.

And humans have other issues, like age-related degeneration turning good drivers into hazards, or overconfident idiots driving with their knees while texting and snacking.

No one's arguing that humans are perfect drivers. I'm disagreeing with a statement saying that we're okay with bad human behavior just because we're less tolerant of bad algorithmic behavior. Even an improvement in fatality rates is acknowledged as being good but not as preferable as 0 fatalities.

Steve French
Sep 8, 2003

On "it doesn't have to be safe, just safer":

Another reason there's more to it than that: I bet for the most part people are thinking of safety per vehicle-mile traveled, and this starts to fall apart a bit once you acknowledge that widespread availability of autonomous vehicles may increase vehicle miles. Even if we wishfully grant that Lyft and Uber drivers are better/safer than average because they drive all the time, that means less than it sounds if they result in people being driven somewhere rather than walking or riding a bus or bicycle.

Furthermore, safety as it relates to collisions is far from the only externality of concern, even in the limited scope of increased miles (environmental concerns, economic impact of a more dispersed population, etc).
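The per-mile versus total-exposure distinction is one line of arithmetic. Every number below is invented purely for illustration:

```python
# If autonomy cuts the per-mile death rate but induces extra driving,
# total deaths can still go up. All figures are made up for illustration.
human_rate = 1.2        # deaths per 100M vehicle-miles with human drivers
av_rate = 0.9           # hypothetically 25% safer per mile
miles = 100.0           # current annual travel, in units of 100M miles
induced_factor = 1.5    # hypothetical 50% more miles once driving is effortless

deaths_human = human_rate * miles
deaths_av = av_rate * miles * induced_factor
print(f"human drivers: ~{deaths_human:.0f} deaths, AVs: ~{deaths_av:.0f} deaths")
# "safer per mile", yet roughly 120 vs. 135: more total deaths
```

A 25% per-mile improvement is wiped out by a 50% increase in miles driven, which is why per-mile safety claims say little on their own.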


Happy Thread
Jul 10, 2005

by Fluffdaddy
Plaster Town Cop

It does take a herculean effort to avoid finding out that today's tech companies are not in responsible hands, and are driven by the stupidest whims of late-stage capitalism. To wonder if they'll succeed or not at making something safe misses the larger point -- they're lying about everything else so far, and we do not owe them benefit of the doubt or rigorous analysis.

For instance, they're lying by promising to fix the environment by putting more cars on the road. They seriously think we'll believe that. Not just any cars, just luxury cars only able to be driven by the rich 1%. Automatically, they promise. More polluting, traffic-causing cars on the road are going to fix the looming climate crisis, when improving public transportation does it better and is easily on the table for less money. Choosing the former usually comes from a disdain for people who might enjoy public benefits. Believing that it's progress for climate change usually comes from a disingenuous concern-trolling approach when it comes to the environment. Autonomous, electric cars pollute just as much as any other, if not more -- besides the obvious disposal of hazardous battery waste, and the electricity being initially produced by burning coal, there's also the less obvious fact that these are not zero-emissions cars at all. People forget that road dust makes up a big portion of emissions. Teslas kick up more of that than normal, in fact, by being much heavier than cars in the same class. Airborne road dust increases with the eighth power of vehicle weight.

That's just one example of a lie that the entire autonomous industry is built on top of, that anyone can see makes the whole mission a farce. When it comes to specific companies like Tesla, it should be even easier to see. There's nothing but a constant torrent of high-profile lies getting busted in the news, including about how their autonomous tech claims to work, yet they remain at the forefront of public road testing simply because Musk knows no consequences will be faced for so brazen an attack on the public.

To avoid hearing all the lies of tech given the media's constant fixation on autonomous cars, to not identify them as lies as you hear them, to not become more immediately dismissive of the whole thing, does seem like it would take serious effort to do.
