D-Pad
Jun 28, 2006

Platystemon posted:

We really can’t.

The U.S. had one point one fatalities per hundred million vehicle miles traveled in 2019. Computer systems cannot be said to be safer than human drivers till they’ve driven at least that far without a single fatality.

Which is why I posted the other two articles.
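For a sense of scale on the statistic quoted above, here's a quick back-of-the-envelope sketch of what 1.1 fatalities per hundred million miles implies (just arithmetic on the quoted figure, nothing more):

code:

# Rough check of the quoted statistic: 1.1 fatalities per 100 million
# vehicle miles traveled (US, 2019).
fatalities_per_100m_miles = 1.1

# Average miles driven per fatality by human drivers.
miles_per_fatality = 100_000_000 / fatalities_per_100m_miles
print(f"~{miles_per_fatality / 1e6:.0f} million miles per fatality")
# => roughly 91 million miles, fatality-free, before the argument above
#    would even let a computer system claim parity.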


Kith
Sep 17, 2009

You never learn anything
by doing it right.


https://i.imgur.com/kx4hz3o.mp4

Aramoro
Jun 1, 2012




D-Pad posted:

https://interestingengineering.com/how-safe-are-self-driving-cars

I don't actually entirely agree with that analysis for the reasons outlined in the 2nd article below which is why I said I don't believe they are ready yet:

https://www.scientificamerican.com/article/are-autonomous-cars-really-safer-than-human-drivers/

This article is a great analysis of how we should approach the problem and make a decision as well as good statistics on human drivers. There is a lot to consider that your average American will never do:

https://www.forbes.com/sites/lancee...sh=7bec605546ed


So like I said, I don't think Tesla has any business releasing this into the wild at this time, but personally I think I'll be OK with self-driving cars on the road sooner than the average American will be. The first time one of them runs over a kid, the technology will forever be evil in the eyes of the general public, even if studies were to conclusively show that they drastically reduce child pedestrian deaths overall.

The psychology behind that is why I'm not sure we'll ever get full self-driving cars: regulators are going to come down hard on them due to public outrage, most likely in the next couple of years. Especially because a lot of these projects originate in Silicon Valley, and the public and government are already starting to turn on those companies for lots of other reasons.

So there are really two issues with these statistics. The first is that it's the manufacturer deciding who was at fault in the crash. Crashing whilst being technically correct is still crashing.

The second is that with many of these crashes you can analyse why the crash happened. With the Uber fatality it's very easy to say that it was dark and the pedestrian was in the road, all mitigating circumstances. But the car identified the danger and decided not to stop when it could have. It doesn't matter if that crash was the only one ever; it was avoidable, and a software problem killed someone. That needs to be corrected.

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.

Well, that's Chrysler products for you

GotLag
Jul 17, 2005

You mustn't eat it!
What scale is that?

Humphreys
Jan 26, 2013

We conceived a way to use my mother as a porn mule


GotLag posted:

What scale is that?

Wow, I've seen it many times and never noticed it was a scale.

Content: normally these guys can be cringey and clickbaity, but this is a serious business video on milling a 4-tonne rock drill bit.

https://www.youtube.com/watch?v=Mp_FPjh7kBA

BlackIronHeart
Aug 2, 2004

PROCEED

KoRMaK posted:

https://i.imgur.com/lZQnNWU.mp4
all that exposed skin all i can think about is one thing

As fellow goon CroatianAlzheimers once told me, there are only two types of bikers: those who haven't crashed, and loving liars.

Kitfox88
Aug 21, 2007

Anybody lose their glasses?

BaldDwarfOnPCP posted:

*orders a polonium 210 enema*

Yeah I'm finally gonna poo poo right.

Leaving that bathroom a superfund site.

GotLag posted:

That sounds more like multiple cases of cops who don't feel like investigating murder

Honestly acab so probably


Loving the dude in his door going all :vince:

CarForumPoster
Jun 26, 2013

⚡POWER⚡
I really don't get the Tesla fear mongering either. I don't have opinions on "autonomous cars" as a whole because Tesla is the only one with a large number of miles driven semi-autonomously. If the measurement of that safety is miles between incidents, Autopilot seems to be much safer than the US average, at least according to the numbers reported by Tesla: 4.5M miles between crashes with Autopilot engaged, compared to a US average of 0.5M.

There are certainly flaws with this logic; maybe the most important is that Autopilot can only be engaged in lower-risk scenarios, but that seems to be changing very soon. Additionally, the types of people who buy Teslas may be safer drivers than those who drive Maximas and Malibus, which have the highest MY2017 death rates. So at a minimum the fear mongering seems unsupported, but the consequences of acting on that fear mongering may mean 5-10x more accidents.

There's a ton of fear mongering around "plowing into pedestrians/children" in every thread it's mentioned in, but here's the Euro NCAP rating on their cheapest car: https://www.euroncap.com/en/results/tesla/model-3/37573

It scores comparably in pedestrian safety to BMW sedans, and the test report indicates the cyclist avoidance got full points and the pedestrian avoidance performed well. There are billions of miles driven... so I just don't get the cultish hate, in the same way I don't get supporting Trump or not wearing a mask in public in the US.
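To make the comparison in that post explicit, here's a minimal sketch using the figures quoted there (Tesla-reported, so treat them as illustrative rather than audited):

code:

# The miles-between-crashes comparison from the post above.
# Both figures are the ones quoted there (Tesla-reported).
autopilot_miles_per_crash = 4_500_000
us_average_miles_per_crash = 500_000

ratio = autopilot_miles_per_crash / us_average_miles_per_crash
print(f"Autopilot: ~{ratio:.0f}x more miles between crashes than the US average")

# Caveats the post itself raises: Autopilot only engages in lower-risk
# scenarios (mostly highway driving), and Tesla buyers may not be a
# representative sample of drivers, so the raw ratio overstates any
# like-for-like safety advantage.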

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works.

while the people most in favour of it are people that see it as a magical black box that solves their problems.

just something to think about.

Humphreys
Jan 26, 2013

We conceived a way to use my mother as a porn mule


Jabor posted:

one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works.

while the people most in favour of it are people that see it as a magical black box that solves their problems.

just something to think about.

This works in a LOT of threads.

Platystemon
Feb 13, 2012

BREADS

Jabor posted:

one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works.

while the people most in favour of it are people that see it as a magical black box that solves their problems.

just something to think about.

See: Tesla’s general counsel

zedprime
Jun 9, 2007

yospos
Replace cars with trains. Replace airplanes with trains. Bing bang boom automation is now doable within our generation. Why not done?

Ignoring any engineering limitations, I don't have faith in anything becoming popularly accepted that needs to seriously discuss actuarial science in a public setting, because there are like 5 humans who both really understand actuarial science and don't get squeamish applying it to human life. And you know what, those 5 folks are probably the weird ones.

Still really sour about death panels forever being associated with single payer even though the death panels exist today in insurance companies.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Jabor posted:

one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works.

while the people most in favour of it are people that see it as a magical black box that solves their problems.

just something to think about.

I find your implication that I’m dumb and don’t understand pretty goon-:smug:

I'm a mech E undergrad with an MS in systems engineering, and I've developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (e.g. Barber dime) with 97% accuracy, using a DL model I gathered the data for and trained with PyTorch. I'd say I have a better understanding than most.
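For the curious, a classifier like the one described would typically be transfer learning on a pretrained backbone. A minimal sketch of that approach (the class count, dataset path, and single-pass training loop are placeholders, not the poster's actual code):

code:

# Minimal transfer-learning sketch of the kind of classifier described above.
# The class count, dataset path, and one-epoch loop are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_COIN_CLASSES = 40  # hypothetical: Barber dime, Mercury dime, wheat cent, ...

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("coin_photos/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ResNet-18 with a new classification head for the coin classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_COIN_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

As the rest of the thread points out, this kind of static-image classification, however accurate, has no notion of time, occlusion, or the cost of being wrong, which is where driving gets hard.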

Space Kablooey
May 6, 2009


Understanding the tech is different from understanding the industry

Memento
Aug 25, 2009


Bleak Gremlin
And then understanding the economics of the situation is another step as well

Vincent Van Goatse
Nov 8, 2006

Enjoy every sandwich.

Smellrose

CarForumPoster posted:

Im a mech E undergrad, MS Systems engineering and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg barber dime) with 97% accuracy using a DL model I gathered the data for and trained using pytorch. I’d say I have a better understanding than most.

This is loving nothing like teaching a car to drive itself.

You'll be in the ballpark when your lash-up can tell what year a Lincoln penny was minted, at night, from twenty feet away, while it's being thrown past the optical scanner by an MLB pitcher.


Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Imagine a car with a 97% chance of not driving you straight into oncoming traffic each time you took it out on the road.
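Working out what a 97% per-trip rate compounds to (quick arithmetic, not from the post):

code:

# How fast a 97%-per-trip success rate compounds away.
p_safe_trip = 0.97

for trips in (1, 10, 23, 100, 365):
    p_all_safe = p_safe_trip ** trips
    print(f"{trips:>3} trips: {p_all_safe:6.1%} chance of zero incidents")

# After about 23 trips the odds of a clean record drop below 50%;
# over a year of daily driving they are effectively nil.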

ethanol
Jul 13, 2007



CarForumPoster posted:

I find your implication that I’m dumb and don’t understand pretty goon-:smug:

Im a mech E undergrad, MS Systems engineering and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg barber dime) with 97% accuracy using a DL model I gathered the data for and trained using pytorch. I’d say I have a better understanding than most.

Cool now make it 99.9999%

Hexyflexy
Sep 2, 2011

asymptotically approaching one

CarForumPoster posted:

I find your implication that I’m dumb and don’t understand pretty goon-:smug:

Im a mech E undergrad, MS Systems engineering and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg barber dime) with 97% accuracy using a DL model I gathered the data for and trained using pytorch. I’d say I have a better understanding than most.

*slams down her mathematician card* I've worked on and with the maths behind modern pattern recognition systems (and sold some!). If you think a set of dumb rear end classifiers with some fuzzy logic on top can provide human-like control impulses to a car without a bit of the old motor-slaughtering going on, I don't know what to tell you. The problem is inherently non-computational; god knows why we're investing so much money in this poo poo as a species.

Son of Thunderbeast
Sep 21, 2002
Whoa look out we have an actual no poo poo mech E undergrad here, descended from on high to explain tesla autopilot to us.

madeintaipei
Jul 13, 2012

I wish this derail would go play in traffic.

Platystemon
Feb 13, 2012

BREADS
This derail could have been prevented by a computer vision system that could recognise a coin on the tracks with ninety‐seven percent accuracy.

Mimesweeper
Mar 11, 2009

Smellrose
lmao, "im an engineer who wrote an app so i understand cars" is goony as gently caress and exactly how we ended up with tesla "autopilot."

if you don't understand the safety concerns recognized by people with relevant experience, maybe consider it's because you don't have any? nah.

MononcQc
May 29, 2007

CarForumPoster posted:

I find your implication that I’m dumb and don’t understand pretty goon-:smug:

Im a mech E undergrad, MS Systems engineering and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg barber dime) with 97% accuracy using a DL model I gathered the data for and trained using pytorch. I’d say I have a better understanding than most.

You have access to free papers so go take a look at Bainbridge ‘83 for starters. It’s 40 years old but instantly shows the dangers of Tesla’s approach.


If you have decent access, go find Joint Cognitive Systems: Patterns in Cognitive Systems Engineering by David D. Woods; the last two chapters specifically cover concepts such as the context gap and Norbert's Contrast, which are great for providing context and grounding for these ideas. Here's a sample:

quote:


Literal-mindedness creates the risk that a system can’t tell if its model of the world is the world it is actually in (Wiener, 1950). As a result, the system will do the right thing [in the sense that the actions are appropriate given its model of the world], when it is in a different world [producing quite unintended and potentially harmful effects]. This pattern underlies all of the coordination breakdowns between people and automation.
[…]
The context gap is the need to test whether the situation assumed in the model underlying the algorithm to be deployed matches the actual situation in the world. Monitoring this gap is fundamental to avoiding the error of the third kind (solving the wrong problem), and to the demand for revision and re-framing in JCSs [Joint Cognitive Systems].
[…]
When a field of practice is about to experience an expanding role for automata, we can predict quite confidently that practitioners and organizations will adapt to develop means to align, monitor, and repair the context gap between the automata’s model of the world and the world. Limits to their ability to carry out these functions will mark potential paths to failure—breakdowns in resilience. In addition, we note that developers’ beliefs about the relationship of people and automation in complex and high consequence systems (substitution myth and other over-simplifications) lead designers to miss the need to provide this support and even to deny that such a role exists (but see Roth et al., 1987, Smith et al., 1997 and other studies of the brittleness of automata).

Now people, as a cognitive system, also are vulnerable to being trapped in literal-mindedness, where they correctly deploy a routine given their model of the world, but are in fact facing a different situation (Weick et al., 1999). When anomaly response breaks down, it is often associated with an inability to revise plans and assessments as new evidence arrives and as situations change.
[…]
But in observing work, we usually see practitioners probing and testing whether the routine, plan or assessment fits the actual situation they are facing (e.g., how mission control, as a JCS, works). This points to a fundamental difference between people and automata—people have the capability to repair the context gap for themselves (to some degree), but more powerfully, for and through others as part of a process of cross-checking and broadening checks across different perspectives.
[…]
    Norbert’s Contrast
   Artificial agents are literal minded and disconnected from the world while human agents are context sensitive and have a stake in outcomes.

The key is that people and computer automata start from opposite points—the former as context-bound agent and the latter as literal-minded agent—and tend to fall back or default to those points without the continued investment of effort and energy from outside.
Automata start: literal --> developers exert effort and inventiveness to move these computer systems to be more --> adaptive, situated, contextualized --> but there are always limits in this process requiring human mediation to maintain or repair the link between model and actual situation (the fundamental potential for surprise that drives the need for resilience).
On the other hand, people start contextualized --> developers exert effort and inventiveness to move these human systems toward --> more abstract, more encompassing models for effective analysis/action and away from local, narrow, surface models.

Norbert Wiener's (1950) original analysis revealed that literal-minded agents are paradoxically unpredictable because they are not sensitive to context. The now classic clumsy automation quotes from studies of cockpit automation—"What's it doing now? Why is it doing that? What will it do next?"—arise from the gap between the human flight crew as context sensitive agents grounded in the world, and the literal mindedness of the computer systems that fly the aircraft.

The computer starts from and defaults back to the position of a literal-minded agent. Being literal-minded, a computer can’t tell if its model of the world is the world it is in. This is a by-product of the limits of any model to capture the full range of factors and variations in the world. A model or representation, as an abstraction, corresponds to the referent processes in the world only in some ways. Good models capture the essential and leave out the irrelevant; the catch is that knowing what is essential and irrelevant depends on the goal and task context.
  As Ackoff (1979, p. 97) put it,
The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem,  which it never is.

This “catch” on models captures the need to re-visit the connection between the model being deployed in the form of algorithms, plans, procedures, routines or skills and the actual conditions being faced, given change and the changing flow of evidence. This is the context gap that needs to be closed, as we saw in previous discussions of revising assessments, reframing conceptualizations, modifying plans in progress.
[…]
It is up to people situated in context to ground computer processing in the world, given that particular situations can arise to challenge the boundary conditions of the model behind the algorithm (the potential for surprise). For people in context there is an open future, while literal-minded agents are stuck within the walls of the model underlying their operation. […]

Closing the context gap is about knowing and testing what “rules” apply in what kind of situation. The key is to determine the kind of situation faced, and to recognize how situations can change. People can be sensitive to cues that enable them to switch rules or routines as they test whether the situation is different from what was originally construed. And despite not always performing well at this function (getting stuck in one view; pp. 76-77 and 104-105), people provide the only model of competence at re-framing or re-conceptualizing that we can study for clues about what contributes to expert performance and how to support it.
[…]
With advances (developer effort), computer agents can be made more situated in the sense of taking more factors into account in their own processing. These automata still function well only within artificially bounded arenas (although the size and position of these arenas grow and shift as people learn to produce new capabilities). Literal-minded automata are always limited by the brittleness problem—in other words, however capable, developer effort stretches automata away from their literal-minded origin, but without chronic effort to ground these systems in context, the risk of literal-mindedness re-emerges.

This work to close the context gap tries to keep the environment where the automata are placed in alignment with the assumptions underlying them. However, the effort to ground literal-minded automata in context often occurs in the background, hidden. Nevertheless, it takes continued effort to maintain this match of model-environment in the face of the eroding forces of variability and change inherent in our physical world (the potential for surprise inherent in the Law of Requisite Variety). And, as the envelope of competence of automata shifts, the resulting new capability will afford adaptation by stakeholders in the pursuit of goals. This creates new contexts of use that may challenge the boundary conditions of the underlying algorithms (as captured in the Law of Stretched Systems, p. 18, and as has been experienced in software failures that arose from use migration and requirements change).

Thus, Norbert’s Contrast specifies two complementary vectors:
(1) The capabilities of automata grow as (some) people learn to create and instantiate new algorithms, plans and routines;
(2) In parallel, people at the sharp end adapt to the brittleness of literal-minded agencies to monitor, align and repair the context gap omnipresent in a changing uncertain world (and the change results, in part, from how people adapt to exploit the new capabilities.)

The contrast captures a basic tradeoff that must be balanced in the design of any JCS because of the basic constraint of bounded rationality of finite resources and irreducible uncertainty (p. 2). Literal-minded and context-bound agents represent different responses to the tradeoff, each with different vulnerabilities. Norbert’s Contrast points out that literal, disconnected agents can be powerful help for context bound agents, but the literal-minded agents in turn need help to be grounded in context (Clancey, 1997). Coordinating or complementing different kinds of cognitive agents (literal-minded and context-bound) in a joint cognitive system is the best strategy for handling the multiple demands for performance in a finite-resource, uncertain, and changing universe.

PurpleXVI
Oct 30, 2011

Spewing insults, pissing off all your neighbors, betraying your allies, backing out of treaties and accords, and generally screwing over the global environment?
ALL PART OF MY BRILLIANT STRATEGY!

Jabor posted:

Imagine a car with a 97% chance of not driving you straight into oncoming traffic each time you took it out on the road.

Ah but you see I would simply drive into a tree three times myself and then statistically the next ninety-seven trips should be perfectly safe.

Gaukler
Oct 9, 2012


*Goon wanders into thread and slams down Book-IT membership card*

“As you can see, I’m entitled to one free personal pan pizza, so I think I can tell that AI-controlled drones are the future of pizza creation and delivery”

E: more seriously, this shows why things like Tesla are so popular with technical people, as they know just enough to assume they know wtf they're doing without actually getting over the Dunning-Kruger hump


Pigsfeet on Rye
Oct 22, 2008

I'm meat on the hoof

Hexyflexy posted:

*slams down her mathematician card* I've worked on and with the maths behind modern pattern recognition systems (and sold some!), if you think a set of dumb rear end classifiers with some fuzzy logic on top can provide human like control impulses to a car without a bit of the old motor-slaughtering going on I don't know what to tell you. The problem is inherently non-computational, god knows why we're investing so much money in this poo poo as a species.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Silicon Valley pioneered self-driving cars. But some of its tech-savvy residents don’t want them tested in their neighborhoods.

quote:

They’re familiar with the tech industry. That’s why they’re worried about what the self-driving revolution will entail.

Also, take a look at Ironies of Automation:

quote:

This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the "classic" approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.

ncumbered_by_idgits
Sep 20, 2008

ultrafilter posted:

This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the "classic" approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.

I read this as "the human operator will do everything in his/her power to gently caress the thing up."

Hexyflexy
Sep 2, 2011

asymptotically approaching one

ncumbered_by_idgits posted:

I read this as "the human operator will do everything in his/her power to gently caress the thing up."

I love studying this kind of problem; it's a total pain and relevant to half the stuff posted in this thread. Let's say you've abstracted out some industrial system so the operator, Steve, has a couple of shutdown buttons and dials to monitor something. Dial A goes over limit, you hit shutdown A'; dial B goes over limit, you hit shutdown B'.

Because he doesn't know what's happening behind the scenes, from his point of view if A and B go off limit at the same time, you punch A' and B'. Problem! If you hit B' within 15 seconds of A' (they were never designed to be on at the same time), a pressure hammer happens in the pipework of that particular petroleum plant that detonates it like a small nuclear bomb.

The principle goes for a forklift as much as a nuclear plant. If you hide the details, the humans effectively get dumber with respect to the underlying system.
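That scenario translates almost directly into a toy model; the 15-second interlock and the button names come from the post above, everything else is invented for illustration:

code:

# Toy model of the abstraction problem described above: the operator's panel
# hides an interaction constraint between the two shutdown paths.
import time

MIN_GAP_SECONDS = 15          # A' and B' were never designed to be on together
last_shutdown_a = None

def shutdown_a():
    global last_shutdown_a
    last_shutdown_a = time.monotonic()
    print("Shutdown A' engaged")

def shutdown_b():
    # The constraint Steve's dials-and-buttons view never showed him:
    if last_shutdown_a is not None and time.monotonic() - last_shutdown_a < MIN_GAP_SECONDS:
        raise RuntimeError("pressure hammer: B' engaged within 15 s of A'")
    print("Shutdown B' engaged")

# From Steve's point of view both dials are over limit, so both buttons get hit.
try:
    shutdown_a()
    shutdown_b()
except RuntimeError as exc:
    print(f"Plant lost: {exc}")  # through no fault of the operator's mental model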

MononcQc
May 29, 2007

ncumbered_by_idgits posted:

I read this as "the human operator will do everything in his/her power to gently caress the thing up."

This opinion is based on outdated HABA-MABA ideas (this is called the "Fitts" model; it stands for "Humans Are Better At / Machines Are Better At" -- also MABA-MABA, using "Men" rather than "Humans"). This model frames humans as slow, perceptive beings capable of judgment, and machines as fast, undiscerning, indefatigable things.

This is, to be polite, a beginner's approach to automation design. It's based on scientifically outdated concepts and intuitive-but-wrong sentiments; it's comforting in letting you think that only the predicted results will happen, and it totally ignores any emergent behaviour. It operates on what we think we see now, not on stronger underlying principles, and it often has strong limitations when it comes to being applied in practice.

It is disconnected from the reality of human-machine interactions, and it frames choices as binary when they aren't, usually with the intent of pushing the human out of the equation when you shouldn't. The big quote I posted is specifically about why this isn't true: the relationship between humans and machines is one where they need to be seen as teammates who help each other, not a situation where one needs to be pushed out of the way to prevent the mistakes of the other.

Bad: a car that asks you to shift into a monitoring mode where you pay attention to everything happening, ready to take over when the car hits its limits, the same way as if you were already driving.
Good: a car that tells you, when you're doing a lane change, that there's actually still something in the lane, as a way to contextually supplement your attention.
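A sketch of the contrast between those two designs (my illustration of the idea, not anything shipped in an actual car):

code:

# "Bad": the automation drives until it can't, then hands everything back at once.
def bad_design(autopilot_confident):
    if autopilot_confident:
        return "autopilot drives; human is still expected to monitor everything"
    return "TAKE OVER NOW"  # full handoff at the worst possible moment

# "Good": the automation supplements the driver's attention in context.
def good_design(driver_signals_lane_change, adjacent_lane_occupied):
    if driver_signals_lane_change and adjacent_lane_occupied:
        return "warn: there is still a vehicle in the lane you are moving into"
    return None  # stay quiet; the driver remains the one driving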

MononcQc
May 29, 2007

Another super critical factor is organizational and it's got to do with the drift between work-as-done and work-as-imagined. In short: the way to make systems successful tends to rely on knowing which rules to bend and when to break them.

Particularly, the idea of "following all the rules and regulations as diligently as possible" is a form of striking action called work-to-rule. The problem is one where we will collectively rely on people continuously breaking rules to make things functional, but when a failure happens you end up blaming the human for doing that very same thing.

The rules in place are interpreted and modified all the time by people in the field figuring out the proper tradeoffs. Whatever isn't covered by the rules and procedures is left to the expertise of operators, but that expertise is called into action and required specifically when the rules no longer apply.

It's generally a bad situation to be in when you are told to never deviate from the rules but are expected to handle the case where the rules are no longer sufficient to operate things (Hexyflexy's example is a good one)

Platystemon
Feb 13, 2012

BREADS
The automation in this incident was low, but it's a good example of undertrained operators failing to deal with abnormal conditions, something that is going to become more common with creeping automation.

https://www.youtube.com/watch?v=1zDcsjHyxr8

LifeSunDeath
Jan 4, 2007

still gay rights and smoke weed every day

MononcQc posted:

The problem is one where we will collectively rely on people continuously breaking rules to make things functional, but when a failure happens you end up blaming the human for doing that very same thing.


LOL Working any corporate job is like this, you're always technically doing something wrong that's a fireable offense, but you wouldn't be able to do the job otherwise.


wesleywillis
Dec 30, 2016

SUCK A MALE CAMEL'S DICK WITH MIRACLE WHIP!!

D-Pad posted:

I'd rather a Tesla drive itself over some human drivers I know getting behind its wheel.

I'm sure it would drive right the gently caress over them, but just because they're bad drivers doesn't mean we should execute them.

rotinaj
Sep 5, 2008

Fun Shoe
All of you need to shut your uneducated gobs and listen to me, because I...

Am an undergrad engineering student.

:laffo:

Cojawfee
May 31, 2006
I think the US is dumb for not using Celsius

CarForumPoster posted:

I find your implication that I’m dumb and don’t understand pretty goon-:smug:

Im a mech E undergrad, MS Systems engineering and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg barber dime) with 97% accuracy using a DL model I gathered the data for and trained using pytorch. I’d say I have a better understanding than most.

So you've done stuff with images; are you able to see that it's bad to base a car autopilot entirely on image recognition? Because Tesla only uses cameras and routinely thinks that an overpass 200 feet away is a truck it needs to stop for, and it also thinks that the ground under a semi trailer is a drivable surface. Or the times it gets dazzled by the sun and slams into a truck. Or doesn't realize that cones are physical objects that shouldn't be hit. These problems are solvable with other tech like radar and lidar, but Elon is disrupting tech and he only wants to use cameras.
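To make the objection concrete, here's a toy cross-check of the kind a second, independent sensor enables (invented example; not Tesla's or anyone's actual perception stack):

code:

# Toy illustration of sensor corroboration; not anyone's actual stack.
from typing import Optional

def should_brake(vision_alert: bool, radar_range_m: Optional[float]) -> bool:
    """Brake only when the camera's alert is corroborated by a close radar return.

    A camera alone can mistake an overpass or glare for an obstacle (phantom
    braking); requiring agreement from a ranging sensor filters many of those.
    """
    return vision_alert and radar_range_m is not None and radar_range_m < 50.0

# Phantom case: camera flags an overpass ~200 ft out, radar sees nothing solid ahead.
print(should_brake(vision_alert=True, radar_range_m=None))   # False: no panic stop
# Genuine obstacle: both sensors agree something is 30 m ahead.
print(should_brake(vision_alert=True, radar_range_m=30.0))   # True

This only covers the phantom-positive direction; the missed-truck and trailer-underside cases above are the mirror image, where you'd want a close radar or lidar return to be able to trigger braking even when the camera sees "drivable surface".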


Platystemon
Feb 13, 2012

BREADS
Tesla only recently developed object permanence LMAO.
