Platystemon posted:We really can’t. Which is why I posted the other two articles
|
|
# ? Dec 5, 2020 07:42 |
|
https://i.imgur.com/kx4hz3o.mp4
|
# ? Dec 5, 2020 09:25 |
|
D-Pad posted:https://interestingengineering.com/how-safe-are-self-driving-cars There are really two issues with these statistics. The first is that it's the manufacturer deciding who was at fault in the crash. Crashing while being technically correct is still crashing. The second is that with many of these crashes you can analyse why the crash happened. With the Uber fatality it's very easy to say that it was dark and the pedestrian was in the road, all mitigating circumstances. But the car identified the danger and decided not to stop when it could have. It doesn't matter if that crash was the only one ever; it was avoidable, and a software problem killed someone. That needs to be corrected.
|
# ? Dec 5, 2020 09:28 |
|
Well, that's Chrysler products for you
|
# ? Dec 5, 2020 09:33 |
|
What scale is that?
|
# ? Dec 5, 2020 09:39 |
|
GotLag posted:What scale is that? Wow, I've seen it many times and never noticed it was a scale. Content: normally these guys can be cringey and clickbaity, but this is a serious business video on milling a 4-tonne rock drill bit https://www.youtube.com/watch?v=Mp_FPjh7kBA
|
# ? Dec 5, 2020 10:14 |
|
KoRMaK posted:https://i.imgur.com/lZQnNWU.mp4 As fellow goon CroatianAlzheimers once told me, there are only two types of bikers: those who haven't crashed and loving liars.
|
# ? Dec 5, 2020 10:22 |
|
BaldDwarfOnPCP posted:*orders a polonium 210 enema* Leaving that bathroom a superfund site. GotLag posted:That sounds more like multiple cases of cops who don't feel like investigating murder Honestly acab so probably Loving the dude in his door going all
|
# ? Dec 5, 2020 11:37 |
|
I really don’t get the Tesla fear mongering either. I don’t have opinions on “autonomous cars” as a whole because Tesla is the only one with a large amount of miles driven semi-autonomously. If the measurement of that safety is miles between incidents, Autopilot seems to be much safer than the US average, at least according to the numbers reported by Tesla: 4.5M miles between crashes with Autopilot engaged compared to a US average of 0.5M. There are certainly flaws with this logic; maybe the most important is that Autopilot can only be engaged in lower-risk scenarios, but that seems to be changing very soon. Additionally, the types of people who buy Teslas may be safer drivers than those who drive Maximas and Malibus, which have the highest MY2017 death rates. So at a minimum the fear mongering seems unsupported, but the consequences of acting on that fear mongering may mean 5-10x more accidents. There’s a ton of fear mongering around “plowing into pedestrians/children” every thread it’s mentioned in, but here’s the Euro NCAP rating on their cheapest car: https://www.euroncap.com/en/results/tesla/model-3/37573 It scores comparably in pedestrian safety to BMW sedans, and the test report indicates the cyclist avoidance got full points and the pedestrian avoidance performed well. There are billions of miles driven...so I just don’t get the cultish hate, in the same way I don’t get supporting Trump or not wearing a mask in public in the US.
|
# ? Dec 5, 2020 14:05 |
|
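The miles-between-crashes comparison above can be sanity-checked with a quick back-of-envelope calculation. The 4.5M and 0.5M figures are the Tesla-reported numbers quoted in the post, taken at face value purely for illustration; the exposure-bias caveat is the same one the post itself raises:

```python
# Back-of-envelope comparison of the crash rates quoted above. The
# figures (4.5M miles/crash on Autopilot vs 0.5M US average) are
# Tesla's own reported numbers, taken at face value for illustration.

def crashes_per_million_miles(miles_between_crashes_m: float) -> float:
    """Convert 'millions of miles between crashes' into a crash rate."""
    return 1.0 / miles_between_crashes_m

autopilot_rate = crashes_per_million_miles(4.5)   # ~0.22 crashes per M miles
us_average_rate = crashes_per_million_miles(0.5)  # 2.0 crashes per M miles

ratio = us_average_rate / autopilot_rate
print(f"Autopilot looks ~{ratio:.0f}x safer on raw miles-between-crashes")

# Caveat: Autopilot is mostly engaged on highways, where crash rates are
# far lower for *all* drivers, so the two populations of miles aren't
# directly comparable and the headline ratio overstates the difference.
```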
one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works. while the people most in favour of it are people that see it as a magical black box that solves their problems. just something to think about.
|
# ? Dec 5, 2020 14:09 |
|
Jabor posted:one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works. This works in a LOT of threads.
|
# ? Dec 5, 2020 14:19 |
|
Jabor posted:one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works. See: Tesla’s general counsel
|
# ? Dec 5, 2020 14:53 |
|
Replace cars with trains. Replace airplanes with trains. Bing bang boom, automation is now doable within our generation. Why isn't it done? Ignoring any engineering limitations, I don't have faith in anything becoming popularly accepted that needs to seriously discuss actuarial science in a public setting, because there's like 5 humans who both really understand actuarial science and don't get squeamish applying it to human life. And you know what, those 5 folks are probably the weird ones. Still really sour about death panels forever being associated with single payer even though the death panels exist today in insurance companies.
|
# ? Dec 5, 2020 14:59 |
|
Jabor posted:one curious fact, that you might or might not find interesting, is that the people most cynical about it are the people that actually know how it (and the industry that produced it) works. I find your implication that I’m dumb and don’t understand pretty goony. I'm a mech E undergrad with an MS in Systems Engineering, and I've developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (e.g. Barber dime) with 97% accuracy, using a DL model I gathered the data for and trained with PyTorch. I’d say I have a better understanding than most.
|
# ? Dec 5, 2020 15:02 |
|
Understanding the tech is different from understanding the industry
|
# ? Dec 5, 2020 15:05 |
|
And then understanding the economics of the situation is another step as well
|
# ? Dec 5, 2020 15:05 |
|
CarForumPoster posted:I'm a mech E undergrad, MS Systems Engineering, and have developed/deployed a web app that can take a photo of any US coin since the 1860s and classify what it is (eg Barber dime) with 97% accuracy using a DL model I gathered the data for and trained using PyTorch. I’d say I have a better understanding than most. This is loving nothing like teaching a car to drive itself. You'll be in the ballpark when your lash-up can tell what year a Lincoln penny was minted, at night, from twenty feet away, while it's being thrown past the optical scanner by an MLB pitcher. Vincent Van Goatse fucked around with this message at 15:17 on Dec 5, 2020 |
# ? Dec 5, 2020 15:14 |
|
Imagine a car with a 97% chance of not driving you straight into oncoming traffic each time you took it out on the road.
|
# ? Dec 5, 2020 15:16 |
|
CarForumPoster posted:I find your implication that I’m dumb and don’t understand pretty goon- Cool now make it 99.9999%
|
# ? Dec 5, 2020 15:25 |
|
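The "97% vs 99.9999%" exchange above is really a point about compounding: per-decision reliability multiplies over the many decisions a trip involves. A quick sketch, assuming (purely for illustration) 100 safety-critical decisions per trip:

```python
# Why "97% accurate" is nowhere near good enough for driving:
# per-decision reliability compounds over the decisions in a trip.
# The 100-decisions-per-trip figure is an assumption for illustration.

decisions_per_trip = 100

p_trip_ok_97 = 0.97 ** decisions_per_trip          # ~4.8% of trips clean
p_trip_ok_6nines = 0.999999 ** decisions_per_trip  # ~99.99% of trips clean

print(f"97% per decision      -> {p_trip_ok_97:.1%} of trips incident-free")
print(f"99.9999% per decision -> {p_trip_ok_6nines:.4%} of trips incident-free")
```

The gap is the whole argument: a classifier accuracy that sounds impressive in a demo collapses once it is applied thousands of times per drive.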
CarForumPoster posted:I find your implication that I’m dumb and don’t understand pretty goon- *slams down her mathematician card* I've worked on and with the maths behind modern pattern recognition systems (and sold some!). If you think a set of dumb rear end classifiers with some fuzzy logic on top can provide human-like control impulses to a car without a bit of the old motor-slaughtering going on, I don't know what to tell you. The problem is inherently non-computational; god knows why we're investing so much money in this poo poo as a species.
|
# ? Dec 5, 2020 15:27 |
|
Whoa look out we have an actual no poo poo mech E undergrad here, descended from on high to explain tesla autopilot to us.
|
# ? Dec 5, 2020 15:33 |
|
I wish this derail would go play in traffic.
|
# ? Dec 5, 2020 15:35 |
|
This derail could have been prevented by a computer vision system that could recognise a coin on the tracks with ninety‐seven percent accuracy.
|
# ? Dec 5, 2020 15:38 |
|
lmao, im an engineer who wrote an app so i understand cars is goony as gently caress and exactly how we ended up with tesla "autopilot." if you don't understand the safety concerns recognized by people with relevant experience, maybe consider it's because you don't have any? nah.
|
# ? Dec 5, 2020 15:42 |
|
CarForumPoster posted:I find your implication that I’m dumb and don’t understand pretty goon- You have access to free papers, so go take a look at Bainbridge ’83 for starters. It’s nearly 40 years old but instantly shows the dangers of Tesla’s approach. If you have decent access, go find Joint Cognitive Systems: Patterns in Cognitive Systems Engineering by David D. Woods; the last two chapters specifically cover concepts such as the context gap and Norbert’s Contrast, which provide context and grounding for these ideas. Here’s a sample: quote:
|
# ? Dec 5, 2020 16:06 |
|
Jabor posted:Imagine a car with a 97% chance of not driving you straight into oncoming traffic each time you took it out on the road. Ah but you see I would simply drive into a tree three times myself and then statistically the next ninety-seven trips should be perfectly safe.
|
# ? Dec 5, 2020 16:15 |
|
*Goon wanders into thread and slams down Book-IT membership card* “As you can see, I’m entitled to one free personal pan pizza, so I think I can tell that AI-controlled drones are the future of pizza creation and delivery” E: more seriously, this shows why things like Tesla are so popular with technical people, as they know just enough to assume they know wtf they’re doing without actually getting over the Dunning-Kruger hump Gaukler fucked around with this message at 16:20 on Dec 5, 2020 |
# ? Dec 5, 2020 16:15 |
|
Hexyflexy posted:*slams down her mathematician card* I've worked on and with the maths behind modern pattern recognition systems (and sold some!), if you think a set of dumb rear end classifiers with some fuzzy logic on top can provide human like control impulses to a car without a bit of the old motor-slaughtering going on I don't know what to tell you. The problem is inherently non-computational, god knows why we're investing so much money in this poo poo as a species.
|
# ? Dec 5, 2020 16:28 |
|
Silicon Valley pioneered self-driving cars. But some of its tech-savvy residents don’t want them tested in their neighborhoods. quote:They’re familiar with the tech industry. That’s why they’re worried about what the self-driving revolution will entail. Also, take a look at Ironies of Automation: quote:This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.
|
# ? Dec 5, 2020 16:28 |
|
ultrafilter posted:This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration. I read this as "the human operator will do everything in his/her power to gently caress the thing up."
|
# ? Dec 5, 2020 16:41 |
|
ncumbered_by_idgits posted:I read this as "the human operator will do everything in his/her power to gently caress the thing up." I love studying this kind of problem; it's a total pain and relevant to half the stuff posted in this thread. Let's say you've abstracted out some industrial system so the operator, Steve, has a couple of shutdown buttons and dials to monitor something. Dial A goes over limit, you hit shutdown A'; dial B goes over limit, you hit shutdown B'. Because he doesn't know what's happening behind the scenes, from his point of view if A and B go over limit at the same time, you punch A' and B'. Problem! If you hit B' within 15 seconds of A' (they were never designed to be on at the same time), a pressure hammer happens in the pipework of that particular petroleum plant that detonates it like a small nuclear bomb. The principle goes for a forklift as much as a nuclear plant. If you hide the details, the humans effectively get more dumb with respect to the underlying system.
|
# ? Dec 5, 2020 17:00 |
|
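The shutdown example above is exactly the kind of hidden constraint that has to be encoded in the control logic if the operator can't be expected to know it. A minimal sketch: the names (A', B') and the 15-second figure come from the post, everything else is invented for illustration:

```python
import time

# Sketch of the interlock the example above implies: the hazard (a
# pressure hammer if shutdown B' fires within 15s of A') is invisible
# to the operator, so the control system must encode it explicitly.
# This is an illustration, not any real plant's logic.

MIN_SEPARATION_S = 15.0

class ShutdownInterlock:
    def __init__(self):
        self.last_fired = {}  # shutdown name -> timestamp it fired

    def request(self, name, conflicts_with, now=None):
        """Fire `name` only if its conflicting shutdown hasn't fired
        within the last MIN_SEPARATION_S seconds; otherwise refuse."""
        now = time.monotonic() if now is None else now
        last = self.last_fired.get(conflicts_with)
        if last is not None and now - last < MIN_SEPARATION_S:
            return False  # hold off: firing now would cause the hammer
        self.last_fired[name] = now
        return True

ilock = ShutdownInterlock()
print(ilock.request("A'", conflicts_with="B'", now=0.0))   # True: A' fires
print(ilock.request("B'", conflicts_with="A'", now=5.0))   # False: blocked
print(ilock.request("B'", conflicts_with="A'", now=20.0))  # True: safe now
```

The design point is the same as the post's: either surface the constraint to Steve, or take the decision out of his hands entirely; an interface that hides it while relying on him to respect it fails both ways.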
ncumbered_by_idgits posted:I read this as "the human operator will do everything in his/her power to gently caress the thing up." This opinion is based on outdated HABA-MABA ideas (this is called the "Fitts" model; it means "Humans Are Better At - Machines Are Better At" -- also MABA-MABA, using "Men" rather than "Humans"). This model frames humans as slow, perceptive beings capable of judgment, and machines as fast, undiscerning, indefatigable things. These ideas are, to be polite, a beginner's approach to automation design. The model is built on scientifically outdated concepts and intuitive-but-wrong sentiments; it's comforting in letting you think that only the predicted results will happen, and it totally ignores any emergent behaviour. It operates on what we think we see now, not on stronger underlying principles, and often has strong limitations when it comes to being applied in practice. It is disconnected from the reality of human-machine interactions, and frames choices as binary when they aren't, usually with the intent of pushing the human out of the equation when you shouldn't. The big quote I posted is specifically about why this isn't true: the relationship between humans and machines is one where they need to be seen as teammates who help each other, not a situation where one needs to be pushed out of the way to prevent the mistakes of the other. Bad: a car that asks you to shift into a monitor mode where you pay attention to everything happening, ready to take over when the car hits its limits, the same way as if you were already driving. Good: a car that tells you when you're doing a lane change and there's actually still something in the lane, as a way to contextually supplement your attention span.
|
# ? Dec 5, 2020 17:03 |
|
Another super critical factor is organizational and it's got to do with the drift between work-as-done and work-as-imagined. In short: the way to make systems successful tends to rely on knowing which rules to bend and when to break them. Particularly, the idea of "following all the rules and regulations as diligently as possible" is a form of striking action called work-to-rule. The problem is one where we will collectively rely on people continuously breaking rules to make things functional, but when a failure happens you end up blaming the human for doing that very same thing. The rules in place are interpreted and modified all the time by people in the field figuring out the proper tradeoffs. The gap of what isn't covered by the rules and procedures is left to the expertise of operators, but that expertise of operators is called into action and required specifically when rules no longer apply. It's generally a bad situation to be in when you are told to never deviate from the rules but are expected to handle the case where the rules are no longer sufficient to operate things (Hexyflexy's example is a good one)
|
# ? Dec 5, 2020 17:09 |
|
The automation in this incident was low, but it’s a good example of undertrained operators failing to deal with abnormal conditions, something that is going to become more common with creeping automation. https://www.youtube.com/watch?v=1zDcsjHyxr8
|
# ? Dec 5, 2020 17:10 |
|
MononcQc posted:The problem is one where we will collectively rely on people continuously breaking rules to make things functional, but when a failure happens you end up blaming the human for doing that very same thing. LOL Working any corporate job is like this, you're always technically doing something wrong that's a fireable offense, but you wouldn't be able to do the job otherwise.
|
# ? Dec 5, 2020 17:11 |
|
|
# ? Dec 5, 2020 17:39 |
|
D-Pad posted:I'd rather a Tesla drive itself over some human drivers I know getting behind its wheel.
|
# ? Dec 5, 2020 18:12 |
|
All of you need to shut your uneducated gobs and listen to me, because I... Am an undergrad engineering student.
|
# ? Dec 5, 2020 18:14 |
|
CarForumPoster posted:I find your implication that I’m dumb and don’t understand pretty goon- So you've done stuff with images; are you able to see that it's bad to base a car autopilot entirely on image recognition? Because Tesla only uses cameras and routinely thinks that an overpass 200 feet away is a truck it needs to stop for, and it will also think that the ground under a semi trailer is a drivable surface. Or the times it gets dazzled by the sun and slams into a truck. Or doesn't realize that cones are physical objects that shouldn't be hit. These problems are solvable with other tech like radar and lidar, but Elon is disrupting tech and he only wants to use cameras.
|
# ? Dec 5, 2020 18:15 |
|
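The failure modes listed above (an overpass read as a truck, the space under a trailer read as road) are the classic argument for cross-checking a second sensor modality before acting. A toy sketch, with all thresholds and data shapes invented for illustration:

```python
# Toy illustration of the point above: a second sensor modality lets
# you cross-check a camera detection before slamming the brakes.
# The threshold and the detection dicts are made up for this sketch.

def should_emergency_brake(camera_det, radar_det, max_range_gap_m=10.0):
    """Brake hard only when camera and radar agree there's an obstacle
    at roughly the same range. A camera-only 'truck' that radar says is
    empty air (e.g. an overpass) doesn't trigger a phantom brake."""
    if camera_det is None or radar_det is None:
        return False  # single-modality hit: defer to slower fusion logic
    return abs(camera_det["range_m"] - radar_det["range_m"]) <= max_range_gap_m

# Overpass: camera sees a "truck" at 60 m, radar returns nothing.
print(should_emergency_brake({"range_m": 60.0}, None))               # False
# Real stopped truck: both modalities agree on the range.
print(should_emergency_brake({"range_m": 60.0}, {"range_m": 58.0}))  # True
```

This is deliberately simplistic (real fusion weighs confidences and tracks over time rather than vetoing outright), but it shows why removing radar removes the cheapest guard against exactly the phantom-braking and trailer cases described.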
|
Tesla only recently developed object permanence LMAO.
|
# ? Dec 5, 2020 18:16 |