Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Solkanar512 posted:

If you're going to make this complaint, would you at the very least show us all what sort of data you want, how much bandwidth it would take to transmit it, what would be needed to receive and store such data and around how many flights/day there are that would need to have such systems available worldwide?

Then you might understand that there are still some practical limitations to this complaint, especially in cases where someone likely committed an act of sabotage in a single flight out of tens of millions that year. I'm not saying this is impossible, but it's only fair to point out that it's really hard to do everywhere.

...are you just being contrarian? There are real products that do this: https://www.theguardian.com/business/2016/jun/30/uk-satellite-firm-inmarsat-helped-track-mh370-fit-sbs-on-airbus-jets

There is no good reason why the same satellite feeds that provide 100+ Mbps of real-time connectivity into an aircraft aren't used for monitoring.


ElCondemn posted:

Just so that you understand why these things work the way they do, I'll explain a little bit about how the technology works. Wi-Fi technology, like all radio technology, is limited by a few factors: line of sight, broadcast strength (at both ends), and frequency (which affects penetration, but also how the signal interacts with our ionosphere).

Shortwave radio transmissions (tens of megahertz and below) can travel quite far with relatively low power requirements because the signals bounce off the ionosphere, but because of the low frequency and the broadcast power available, bandwidth is quite low. As you increase power and frequency you can fit more bits into the pipe, but you get shorter distances. There are formulas that explain all of this, and quite a few encoding strategies that make it more efficient, but I won't go into that.

Essentially the problem is that planes don't have line of sight to a ground station, so it's not really possible to send high-frequency/high-bandwidth transmissions reliably, especially on intercontinental flights. There are definitely solutions, like Solkanar said: sending your signal to a satellite as a relay. But if you're using normal short/medium-wave transmissions you're limited in bandwidth (though I'm not sure how much bandwidth would really be necessary). There are solutions to that as well; for instance, a handful of companies are working on satellite communication networks that use microsats to form a mesh network, so they can relay/route between any number of nodes and then beam the data back to ground stations via microwave. But all of this is pretty pointless to discuss, because we have solutions; the problem may just not be large enough to warrant the expense of building a microsat network just to get close-to-real-time data from planes.
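
To put rough numbers on that trade-off (a minimal sketch; the GEO/Ka-band figures are illustrative assumptions, not a real link budget):

code:

import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Geostationary orbit (~36,000 km) at Ka band (~30 GHz): ~213 dB of path
# loss, which is why both ends need high-gain antennas.
print(f"{fspl_db(36_000_000, 30e9):.0f} dB")

# A 10 MHz channel at 10 dB SNR (linear SNR of 10) tops out around 35 Mbps.
print(f"{shannon_capacity_bps(10e6, 10) / 1e6:.0f} Mbps")

Higher frequency buys you more usable bandwidth but costs you path loss and penetration; that's the trade-off described above.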


So what? The Boeings and Airbuses of the world constantly design and implement new technologies; they haven't been sitting around doing things the same way for ages, and the industry is more automated and computerized than ever. I think they understand what needs to be done to approve changes, especially if it would save them money.


Well, you said "When folks like you jump into my industry without a proper respect for the lengths we go to for safety you're going to end up killing people" as though folks are jumping in and killing people. You're just scaremongering, because that isn't how it works, by your own admission.

You also don't understand the "fail early, fail often" idea if you're complaining that it's going to lead to deaths. The methodology, which isn't even what anyone is proposing in this thread, just has to do with creating minimum viable products. If you're designing a plane with the fail-fast model in mind, you iterate a lot, making small changes very rapidly so that you minimize the number of variables during each test. It also ensures you don't put all your eggs in one basket. When you make a big plan and stick to it for months (or even years) on end, you might end up going down a path that isn't fruitful. By making small incremental changes and testing them early, you can see whether you're heading in the right direction a lot quicker.


I've addressed the issues you've brought up, so what is it that isn't being addressed that you keep complaining people are hand-waving? We're talking about your issues right now; why do you keep saying we're ignoring them?

Sorry dude, Ku/Ka-band transmitters are exactly what aircraft use to talk to and from satellites, and they are well known enough that, as linked, Inmarsat is going to use them to do what I want anyway.

1 Mbps per airliner, across the roughly 5,000 airliners in the air at any moment, is about 5 Gbps, which is well within the terabit-class real-time bandwidth of a modern satellite.

And that's waayyy above what you need for real-time cockpit audio, let alone GPS.
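
Back-of-the-envelope (the airborne-fleet count is an assumption on my part; it's the only number that makes the 5 Gbps figure work out):

code:

# ~5,000 airliners airborne at any moment, 1 Mbps of telemetry each.
AIRBORNE_PLANES = 5_000
PER_PLANE_BPS = 1_000_000

total_bps = AIRBORNE_PLANES * PER_PLANE_BPS
print(f"Aggregate: {total_bps / 1e9:.1f} Gbps")                # ~5 Gbps
print(f"Share of a 1 Tbps satellite: {total_bps / 1e12:.1%}")  # ~0.5%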

It's a no-brainer; the only problem is airlines bitching about pennies and pilots complaining.

Malcolm XML fucked around with this message at 20:46 on Aug 13, 2017


Bar Ran Dun
Jan 22, 2006




A Buttery Pastry posted:

Man can't create an ensouled being, only God.

Most things that happen are repeatable. I think consciousness is possible to repeat. No guesses as to a timeline on that.

ElCondemn posted:

Maybe I didn't get your initial point. What you're essentially saying is because we built the computer it has limitations inherent to us. But you also just admitted that we can make computers that respond in ways we didn't program them to.

Even though computers can produce solutions we didn't think of you have a problem with the "model"? I'm not really sure what you are trying to say, it seems like you think computers are inferior to humans because we created them?

I'm saying we still set the constraints, objectives, assumptions, etc. Some of the models we give automation, or that the programs we create give automation, are way better than the models we hold in our heads. Some not so much. I'm not saying one or the other is better; they're just different. They all (including the ones in our heads) also have the limitations inherent in all models.

Doctor Malaver
May 23, 2007

What happened made you stronger

asdf32 posted:

Did you get your analogy wrong (chess is easier for AI than Go)? Planes are indeed easier to automate. The challenge in automating a plane is the complexity of the plane itself, but planes fly in open, heavily regulated airspace and only touch down at special locations that are loaded with instruments and infrastructure. A plane flying at 30,000 ft has huge margins of error.

Cars are simpler, but their environment is far more complex, which is a harder problem to solve, and at highway speed the margin between an AI mistake and a passenger (or pedestrian) death is a fraction of a second.

Thanks, I didn't consider that.

Solkanar512 posted:

I have no problem considering changes; I have a problem with folks like you and OOCC who are completely unwilling to address the difficulties and risks of getting from the status quo to your automated future.

They have been doing that: addressing the difficulties and risks you bring up. You haven't been very good at it, though, because your posting is mostly irritated demands for respect.

What you might be getting wrong is that when IT people say replacing pilots would be easy, they primarily mean the software aspect of the problem. Can AI do it in the immediate future or not? Apparently yes; at least I haven't seen any good arguments against. It might not happen soon because of public reaction, union agreements, legislation, adapting planes, changes to flight control, etc. But it's just a matter for the industry to gauge whether it would be worth it in 10 years or 30.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Doctor Malaver posted:


What you might be getting wrong is that when IT people say replacing pilots would be easy, they primarily mean the software aspect of the problem. Can AI do it in the immediate future or not? Apparently yes; at least I haven't seen any good arguments against. It might not happen soon because of public reaction, union agreements, legislation, adapting planes, changes to flight control, etc. But it's just a matter for the industry to gauge whether it would be worth it in 10 years or 30.

There's also the joke from 30 entire years ago about the flight crew of the future being a man and a dog: the man feeds the dog, and the dog bites the man if he touches the controls. The actual trend is autopilot taking over more and more functions until the human is practically irrelevant, but kept there anyway so people feel good about it. A bunch of passenger train systems did exactly that, then eventually dropped the human anyway.

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
I feel like this is one of those discussions where people will keep talking past one another until they address one of the key tenets of safety systems. Admittedly my experience is in small-scale maritime, not aviation, but realistically the only way to truly maintain safety is to ensure that there is defense in depth.

Automation tends to strip layers of defense rather than add to them. It's why mariners still learn to position-fix with radar and even celestial navigation: GPS has a margin of error, and even though it's correct, or at least accurate enough, in almost all cases, defense in depth ensures the mariner can be confident in their position.

Communication systems are doubled up, with redundant generators. Critical physical checks are doubled. All of this is in service of ensuring that human and/or mechanical error doesn't propagate all the way to the point where an incident takes place.

Automation doesn't just take an existing job and make it robotic; it tends to change the job to fit the robot, and in the case of externally supported automation there's a clear failure point that 'two of them' won't necessarily add sufficient depth to.

Losing control of a tin can that contains a couple hundred people seems a highly undesirable scenario, so until the defense-in-depth question can be answered (and often it can be, by a trained individual on the plane), automation in aviation will experience resistance.

Dignity Van Houten
Jul 28, 2006

abcdefghijk
ELLAMENNO-P


Solkanar512 posted:

If you're going to make this complaint, would you at the very least show us all what sort of data you want, how much bandwidth it would take to transmit it, what would be needed to receive and store such data and around how many flights/day there are that would need to have such systems available worldwide?

Then you might understand that there are still some practical limitations to this complaint, especially in cases where someone likely committed an act of sabotage in a single flight out of tens of millions that year. I'm not saying this is impossible, but it's only fair to point out that it's really hard to do everywhere.

There aren't any practical limitations in TYOOL 2017. You could store the coordinates of every plane in the air in the world in a single text file on one hard drive.

http://www.telegraph.co.uk/travel/travel-truths/how-many-planes-are-there-in-the-world/amp/

39,000 planes in the world. Let's say they're all flying at once, and once every second they transmit coordinates.

Let's be generous and say 100 bytes per plane to broadcast a couple of numbers. That's about 4 megabytes per second for GPS coordinates for every plane in the entire world, or roughly 340 GB a day. My laptop could store three days of that before running out of space. There isn't a technical limitation anymore.
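
Checking that on the napkin (the 1 TB laptop drive is my assumption):

code:

# Every one of ~39,000 aircraft sends a 100-byte position report each second.
PLANES = 39_000
BYTES_PER_REPORT = 100
SECONDS_PER_DAY = 86_400

per_second = PLANES * BYTES_PER_REPORT     # ~3.9 MB/s worldwide
per_day = per_second * SECONDS_PER_DAY     # ~337 GB/day
print(f"{per_second / 1e6:.1f} MB/s, {per_day / 1e9:.0f} GB/day")
print(f"Days on a 1 TB drive: {1e12 / per_day:.1f}")  # ~3 days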

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Maluco Marinero posted:



Automation tends to strip layers of defense rather than add to them. It's why mariners still learn to position-fix with radar and even celestial navigation: GPS has a margin of error, and even though it's correct, or at least accurate enough, in almost all cases, defense in depth ensures the mariner can be confident in their position.


Good news: the navy agrees with you and has totally automated celestial navigation and uses it in missiles and stuff since a missile is totally a thing the enemy would try to jam GPS on.

http://ad.usno.navy.mil/forum/kaplan2.pdf

ElCondemn
Aug 7, 2005


BrandorKP posted:

I'm saying we still set the constraints, objectives, assumptions, etc. Some of the models we give automation, or that the programs we create give automation, are way better than the models we hold in our heads. Some not so much. I'm not saying one or the other is better; they're just different. They all (including the ones in our heads) also have the limitations inherent in all models.

I'm not really understanding how you think AI is supposed to work, if not by being given inputs and letting it make decisions using those inputs. Can you explain how your AI concept is supposed to solve flight problems if it isn't being given objectives and sensor data? Even a general AI will need some kind of feedback system, and just because humans created the feedback system doesn't mean it's inherently flawed or limited. Not that I'm saying AI is easy; I'm just saying that for what we're trying to do (fly a plane), computers can learn to be better than a human pilot, and nothing about how we would teach a computer to fly would make it less proficient than a human. The people designing the AI certainly don't have to be the best fliers to make a robot that can surpass their flying capability, either.

Maluco Marinero posted:

I feel like this is one of those discussions where people will keep talking past one another until they address one of the key tenets of safety systems. Admittedly my experience is in small-scale maritime, not aviation, but realistically the only way to truly maintain safety is to ensure that there is defense in depth.

Automation tends to strip layers of defense rather than add to them. It's why mariners still learn to position-fix with radar and even celestial navigation: GPS has a margin of error, and even though it's correct, or at least accurate enough, in almost all cases, defense in depth ensures the mariner can be confident in their position.

Communication systems are doubled up, with redundant generators. Critical physical checks are doubled. All of this is in service of ensuring that human and/or mechanical error doesn't propagate all the way to the point where an incident takes place.

Automation doesn't just take an existing job and make it robotic; it tends to change the job to fit the robot, and in the case of externally supported automation there's a clear failure point that 'two of them' won't necessarily add sufficient depth to.

Losing control of a tin can that contains a couple hundred people seems a highly undesirable scenario, so until the defense-in-depth question can be answered (and often it can be, by a trained individual on the plane), automation in aviation will experience resistance.

I'm not sure where you got this idea that automation leads to less defense in depth. Do you have any examples of this being the case? I'm not really seeing how increasing automation removes layers, when typically automation adds layers of abstraction that weren't there before. Defense in depth is usually something we talk about when discussing security anyway, so I'm not sure your example really applies.

In your example you're really talking about high availability and fault tolerance. That's achieved in many ways, but a simple one is to have multiple sensors and compare the results. So in your case you wouldn't just trust the radar; your automated system would use radar plus whatever other methods exist, and you would have N+X of each to meet your high availability requirements.
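
A minimal sketch of that compare-the-results idea; the sensor values and the disagreement threshold are made up for illustration:

code:

from statistics import median

DISAGREEMENT_LIMIT = 50.0  # meters; illustrative threshold

def fused_altitude(readings: list[float]) -> tuple[float, list[int]]:
    """Median-vote redundant sensors; flag any that disagree too much."""
    consensus = median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - consensus) > DISAGREEMENT_LIMIT]
    return consensus, suspects

alt, bad = fused_altitude([10_012.0, 10_008.0, 9_540.0])
print(alt, bad)  # 10008.0 [2] -- the third sensor gets voted out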

ElCondemn fucked around with this message at 00:49 on Aug 14, 2017

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
https://youtu.be/ZBaolsFyD9I

This is defense in depth as far as safety is concerned. The autopilot hosed up, but was prevented from causing an accident by the human driver.

If you automate decision making, which is what driving a vehicle is (whether land, sea, or air), what verifies that those decisions are sound, in a manner timely enough to rectify errors before an incident occurs?

If the answer is "let's make the AI infallible", that feels like we're essentially putting our faith in the model being perfect, or close to it, to avoid incidents. If that model is imperfect, what's to stop a particular less-travelled case from always resulting in incidents that a human would've prevented?

At any rate, these are questions that must be answered for full unattended automation to work in regulated industries that have a safety record to protect.

Rastor
Jun 2, 2001

The AIs don't have to be infallible, merely demonstrably safer than humans. And such demonstrations are happening.

GEMorris
Aug 28, 2002

Glory To the Order!

Owlofcreamcheese posted:

okay, you got me, it can't be literally one guy that never eats or sleeps or pees.

It also can't be one guy, because no human would have the cognitive ability to maintain mastery-level operational skills for every type of airplane.

Also, I love that you can envision a scenario where a human operator would be necessary, but are unable to envision a scenario where the functionality that allows a pilot to perform their duties remotely is one of the things that stops working.

You need your backup system inside the machine, because unlike cars, which can fail by stopping and pulling over (which yes, occasionally results in deaths, but not frequently), airliners have a much more severe failure mode.

GEMorris fucked around with this message at 02:11 on Aug 14, 2017

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.

Rastor posted:

The AIs don't have to be infallible, merely demonstrably safer than humans. And such demonstrations are happening.

Read the post fully; this isn't just about cars. Aircraft can't settle for "demonstrably safer" and then eschew the pilot. They have to have enough defense against failure to recover from faults in the system before unattended autopilot can even be considered.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

GEMorris posted:

It also can't be one guy, because no human would have the cognitive ability to maintain mastery-level operational skills for every type of airplane.


This seems like the boldest claim about cognition that anyone has made in this entire thread!

Rastor
Jun 2, 2001

Maluco Marinero posted:

Read the post fully; this isn't just about cars. Aircraft can't settle for "demonstrably safer" and then eschew the pilot. They have to have enough defense against failure to recover from faults in the system before unattended autopilot can even be considered.

Yeah, I know this isn't about cars. The only bar the AIs have to clear is "better than a human". That's the hurdle. That's it.

And if you think they aren't going to clear it you haven't been reading this thread.

Bar Ran Dun
Jan 22, 2006




ElCondemn posted:

I'm not really understanding how you think AI is supposed to work, if not by being given inputs and letting it make decisions using those inputs. Can you explain how your AI concept is supposed to solve flight problems if it isn't being given objectives and sensor data? Even a general AI will need some kind of feedback system, and just because humans created the feedback system doesn't mean it's inherently flawed or limited. Not that I'm saying AI is easy; I'm just saying that for what we're trying to do (fly a plane), computers can learn to be better than a human pilot, and nothing about how we would teach a computer to fly would make it less proficient than a human. The people designing the AI certainly don't have to be the best fliers to make a robot that can surpass their flying capability, either.

Eh, I'm talking more abstractly, less about the specific instance of planes. But you're also projecting a value judgment onto what I'm saying that isn't there. Maybe I can say this more clearly. A person flying a plane is doing so with a model in that person's head. A computer flying a plane is doing so with a model inside the computer. We people created all the models being discussed. There isn't anything special about the ones that exist in our brains. The distinction between the two is a false category. Mental models and automation are both just technology.

ElCondemn posted:


I'm not sure where you got this idea that automation leads to less defense in depth. Do you have any examples of this being the case? I'm not really seeing how increasing automation removes layers when typically automation adds layers of abstraction that weren't there before. Depth in defense is usually something we talk about when discussing security anyway so I'm not sure that your example really applies.

I've seen it with loading computers. Computer-dependent mates are dumb. I've written about it in other threads; damned if I can find it now. Sometimes having a model in the computer lets a less competent person get away without having the appropriate models in their head.

ElCondemn
Aug 7, 2005


Maluco Marinero posted:

https://youtu.be/ZBaolsFyD9I

This is defense in depth as far as safety is concerned. The autopilot hosed up, but was prevented from causing an accident by the human driver.

If you automate decision making, which is what driving a vehicle is (whether land, sea, or air), what verifies that those decisions are sound, in a manner timely enough to rectify errors before an incident occurs?

If the answer is "let's make the AI infallible", that feels like we're essentially putting our faith in the model being perfect, or close to it, to avoid incidents. If that model is imperfect, what's to stop a particular less-travelled case from always resulting in incidents that a human would've prevented?

At any rate, these are questions that must be answered for full unattended automation to work in regulated industries that have a safety record to protect.

I think you may be confusing terms; what you're looking for is a fail-safe mechanism. In the video you posted the autopilot disengaged, for some reason. I can't speak to that, but I would say it's probably not common. I don't know the specifics of how Teslas are supposed to work, but some quick googling shows they have eight cameras, 12 ultrasonic sensors, and radar. The Google self-driving cars use a combination of the same, plus lidar and GPS. All of these sensors are doing exactly what you describe: improving statistical confidence when weighting decisions in the algorithms. So like I keep saying, defense in depth mostly refers to having layers that prevent access or misuse, like firewalls or key cards or retina scanners; the problem it aims to solve is ensuring you're only doing what you want to do (shoot a missile or whatever). The problem you're describing is one of high availability and fail-safes, and automation is great for that kind of thing.
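
A toy illustration of that confidence-weighting idea (my own sketch, not anything from Tesla or Google): fuse redundant range estimates so that noisier sensors count for less.

code:

def fuse(estimates: list[float], variances: list[float]) -> float:
    """Inverse-variance weighting: the minimum-variance way to combine
    independent estimates of the same quantity."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Camera, radar, and lidar range estimates (m) with illustrative variances:
print(fuse([42.0, 40.5, 41.2], [4.0, 1.0, 0.25]))  # ~41.1, lidar dominates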

GEMorris posted:

Also, I love that you can envision a scenario where a human operator would be necessary, but are unable to envision a scenario where the functionality that allows a pilot to perform their duties remotely is one of the things that stops working.

You need your backup system inside the machine, because unlike cars, which can fail by stopping and pulling over (which yes, occasionally results in deaths, but not frequently), airliners have a much more severe failure mode.

So what do you expect a pilot in the cockpit to be able to do if there's a problem with the computer/flight controls? Can you explain to me the point at which an autopilot would be considered good enough? And is that point achieved by having a human pilot?

BrandorKP posted:

Eh, I'm talking more abstractly, less about the specific instance of planes. But you're also projecting a value judgment onto what I'm saying that isn't there.

I'm not projecting a value judgment, I'm just asking for clarification because what you're saying doesn't match up with my understanding of computing.

BrandorKP posted:

Maybe I can say this more clearly. A person flying a plane is doing so with a model in that person's head. A computer flying a plane is doing so with a model inside the computer. We people created all the models being discussed.

We did not create all the models being discussed. The Google AI that learned to walk created the model it uses to walk; the engineers didn't tell it to walk a specific way, they just gave it muscles, joints, and a reward system that emphasized movement in a specific direction.

BrandorKP posted:

There isn't anything special about the ones that exist in our brains. The distinction between the two is a false category. Mental models and automation are both just technology.

There's a huge difference between a computer-generated model and one someone put together in their mind; furthermore, neither model has to correlate with the other even if their objective is the same. I'm not really sure what you're trying to say about mental models and automation being technology; it doesn't mean anything to me, and it certainly doesn't say anything about our discussion.

BrandorKP posted:

I've seen it with loading computers. Computer-dependent mates are dumb. I've written about it in other threads; damned if I can find it now. Sometimes having a model in the computer lets a less competent person get away without having the appropriate models in their head.

I'm just not understanding what you're trying to say here. Yes, a model that's been programmed into a computer can be useful as a guide or assistant to a human operator who doesn't have the model in their head, but what does that have to do with how an AI would generate a model?

ElCondemn fucked around with this message at 04:38 on Aug 14, 2017

Bar Ran Dun
Jan 22, 2006




ElCondemn posted:

We did not create all the models being discussed. The Google AI that learned to walk created the model it uses to walk; the engineers didn't tell it to walk a specific way, they just gave it muscles, joints, and a reward system that emphasized movement in a specific direction.

Right, they set the constraints, assumptions, and objectives... maybe I literally said that already? When we aren't doing those things, you can tell me we aren't creating these things.

ElCondemn posted:

There's a huge difference between a computer-generated model and one someone put together in their mind; furthermore, neither model has to correlate with the other even if their objective is the same. I'm not really sure what you're trying to say about mental models and automation being technology; it doesn't mean anything to me, and it certainly doesn't say anything about our discussion.


Not really; both are technology we use to make the poo poo we do easier.

ElCondemn posted:

I'm just not understanding what you're trying to say here. Yes, a model that's been programmed into a computer can be useful as a guide or assistant to a human operator who doesn't have the model in their head, but what does that have to do with how an AI would generate a model?

It doesn't have anything to do with how an AI would generate a model, and I'm not sure why you think it does.

You're not getting the concept. A system of equations in a person's head, a nomograph, a calculator and a stability booklet, an automated loading computer: all of those things are in the same category of tools that might be used to solve one problem. Sometimes when one chooses to use one tool instead of another there are tradeoffs.

ElCondemn
Aug 7, 2005


BrandorKP posted:

Right, they set the constraints, assumptions, and objectives... maybe I literally said that already? When we aren't doing those things, you can tell me we aren't creating these things.

I guess the problem I'm having is that you're saying there's another way to do it but I'm not understanding what you expect. What is the alternative that you seem to be talking about?

BrandorKP posted:

Not really; both are technology we use to make the poo poo we do easier.

Ideas are not technology. I'm really not getting whatever concept you're trying to explain.

BrandorKP posted:

It doesn't have anything to do with how an AI would generate a model, and I'm not sure why you think it does.

I didn't say it did; I'm asking why you're bringing it up in the context of the conversation happening right now.

BrandorKP posted:

You're not getting the concept. A system of equations in a person's head, a nomograph, a calculator and a stability booklet, an automated loading computer: all of those things are in the same category of tools that might be used to solve one problem. Sometimes when one chooses to use one tool instead of another there are tradeoffs.

I really am not getting whatever concept you're trying to explain. I think it's probably best if I just stop responding, because your answers are getting more cryptic and nonsensical to me. I just don't understand what you're getting at in any way.

SnowblindFatal
Jan 7, 2011

BrandorKP posted:

Right, they set the constraints, assumptions, and objectives... maybe I literally said that already? When we aren't doing those things, you can tell me we aren't creating these things.

Your mom did the same things to you, and look how well you turned out! :) A little computer buddy will be even better, because it doesn't have human emotions loving up its cognitive capability.

Solkanar512
Dec 28, 2006

by the sex ghost

ElCondemn posted:

So what? The Boeings and Airbuses of the world constantly design and implement new technologies; they haven't been sitting around doing things the same way for ages, and the industry is more automated and computerized than ever. I think they understand what needs to be done to approve changes, especially if it would save them money.

Don't be so patronizing, I have direct experience in this.

935 posted:

There aren't any practical limitations in TYOOL 2017. You could store the coordinates of every plane in the air in the world in a single text file on one hard drive.

http://www.telegraph.co.uk/travel/travel-truths/how-many-planes-are-there-in-the-world/amp/

39,000 planes in the world. Let's say they're all flying at once, and once every second they transmit coordinates.

Let's be generous and say 100 bytes per plane to broadcast a couple of numbers. That's about 4 megabytes per second for GPS coordinates for every plane in the entire world, or roughly 340 GB a day. My laptop could store three days of that before running out of space. There isn't a technical limitation anymore.

The issue wasn't data storage; it's transmission from airplanes anywhere on earth to a backup location. If there's such bandwidth available, then I don't see a problem. What I want is for people to actually take technical limitations into account before just whining about poo poo in a way that's equivalent to "well, why isn't the whole plane made out of the black box?" If it's already available, then fine; more data is good.

Doctor Malaver posted:

Thanks, I didn't consider that.

This is terrible advice, by the way. You can't pull off to the side of the road when a plane malfunctions in flight.

quote:

They have been doing that: addressing the difficulties and risks you bring up. You haven't been very good at it, though, because your posting is mostly irritated demands for respect.

I'm not going to apologize for wanting people to actually learn about poo poo before they believe they can come in and disrupt it. Our safety record is incredible, and I'm not going to let a bunch of amateurs think it's a good idea to go around screwing that up without a whole lot of testing and data showing actual improvement. If we gently caress up, people die. If you don't work in an industry where this is the case, you won't understand how seriously this is taken and the measures that are taken to ensure people are safe. This isn't a judgment on you; it's just a simple fact.

quote:

What you might be getting wrong is that when IT people say replacing pilots would be easy, they primarily mean the software aspect of the problem. Can AI do it in the immediate future or not? Apparently yes; at least I haven't seen any good arguments against. It might not happen soon because of public reaction, union agreements, legislation, adapting planes, changes to flight control, etc. But it's just a matter for the industry to gauge whether it would be worth it in 10 years or 30.

Folks need to understand that it's more than just arbitrary laws holding things back. Like I said before, these regulations are written in blood. A lack of training, or of specific types of training, has been shown to cause incidents. Confusion between different types of systems has caused incidents. This is why I was so loving hard on OOCC about the training issue: these regulations didn't spring from the rear end of a freshman member of Congress, they came from things like NTSB reports.

ElCondemn posted:

So what do you expect a pilot in the cockpit to be able to do if there's a problem with the computer/flight controls? Can you explain to me the point at which an autopilot would be considered good enough? And is that point achieved by having a human pilot?

Attempt to compensate for the mechanical malfunctions on the fly and attempt to land the plane in a safe manner. If not that, then prepare the crew and cabin for a crash landing and try to minimize the potential loss of life in the plane and on the ground. When these things happen, they usually happen because of a situation that wasn't expected.

The other thing folks seem to be missing here is that malfunctions don't have to be complete; they can be partial. Also, it's pretty standard to have the pilot fly the plane manually if there are problems. You don't want the computer flying things if its sensors are receiving bad or no data.

ElCondemn
Aug 7, 2005


Solkanar512 posted:

Don't be so patronizing, I have direct experience in this.

Pot calling the kettle black much?

Solkanar512 posted:

Attempt to compensate for the mechanical malfunctions on the fly and attempt to land the plane in a safe manner. If not that, then prepare the crew and cabin for a crash landing and try to minimize the potential loss of life in the plane and on the ground. When these things happen, they usually happen because of a situation that wasn't expected.

The other thing folks seem to be missing here is that malfunctions don't have to be complete; they can be partial. Also, it's pretty standard to have the pilot fly the plane manually if there are problems. You don't want the computer flying things if its sensors are receiving bad or no data.

What is the technical reason why this can't be done with redundancy and automation in both the systems and the sensors? Also, how often does this happen? What fraction of these events had a better outcome because of a human pilot? You're making claims left and right as if computers can't do, or would be worse than a human pilot at, what you just described; nothing you've said excludes an automated pilot from doing a better job than a human.

RandomPauI
Nov 24, 2006


Grimey Drawer
An AI-only cockpit would at least solve the problem of Airbuses crashing after their computers put the plane into "alternate law" mode. That mode gives pilots more direct control over the plane's movement. Pilots don't always notice it's on, because it only engages in emergencies, when they're already preoccupied with that same emergency.

GEMorris
Aug 28, 2002

Glory To the Order!

ElCondemn posted:

Pot calling the kettle black much?


You have direct experience at a Valley company selling capitalists the hopes and dreams of never having to deal with or interact with labor again. You obviously know little to nothing about aviation and aviation safety. You are the perfect example of the hubris of purely technical actors. If it were up to you, a few crashed planes in the testing phase would be an acceptable result.

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?

Solkanar512 posted:

The other thing folks seem to be missing here is that malfunctions don't have to be complete; they can be partial. Also, it's pretty standard to have the pilot fly the plane manually if there are problems. You don't want the computer flying things if its sensors are receiving bad or no data.

You're essentially saying that you wouldn't want a pilot to fly the plane if he suddenly went blind, which is obviously true. Why should I be more concerned about sensors failing than about the human crew somehow becoming disabled? Especially when you can have far more backup sensors than backup pilots on any given flight.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

GEMorris posted:

You have direct experience at a Valley company selling capitalists the hopes and dreams of never having to deal with or interact with labor again. You obviously know little to nothing about aviation and aviation safety. You are the perfect example of the hubris of purely technical actors. If it were up to you, a few crashed planes in the testing phase would be an acceptable result.

Literally every part of a plane is already made of technology. How do you know the hydraulics aren't made by a bro? Do you just inherently know that the company that makes the wings is a bunch of upstanding, dependable, real man's men we can all trust, and likewise the company that verifies the product, but if they made an autopilot it'd be made by one of those... Zuckerbergs?

SeaWolf
Mar 7, 2008
You know why aircraft automation is absolutely going to happen? Because the military is going to drive it when it commits to removing human actors from the control of its warplanes. Hundreds of billions of dollars will be spent on this. Hell, research and development has already been well under way for several years now. All the aerospace contractors with one foot in the military pool and the other in the civil aviation market are going to have their hands deeply involved in these projects, and the results are unquestionably going to make their way into the civilian market once the technology reaches maturity and approval is granted by the appropriate regulatory agencies.

You can brush it off by claiming it's too complicated, there are too many variables, everything is written in blood, etc., but in the end it's just another form of technophobia.

I'm not saying this is going to happen soon; it very well may not be in my lifetime, but it's going to happen. When the successor to, say, the X-47 project demonstrates even more autonomous operating capability, we'll be just that one step closer.

The regulators, the aerospace R&D houses, and the software engineers are all going to be working together in concert to make this happen in such a way that will preserve and enhance your regulations written in blood. This isn't going to be some fly by night SV startup with a bunch of idealistic coders sitting in a garage looking for a way to disrupt flight. But it is going to happen.

call to action
Jun 10, 2016

by FactsAreUseless
I'm pretty sure it's not going to take 50 years before we can automate planes.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

call to action posted:

I'm pretty sure it's not going to take 50 years before we can automate planes.

https://en.m.wikipedia.org/wiki/The_Spirit_of_Butts%27_Farm

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?

SeaWolf posted:

You know why aircraft automation is absolutely going to happen? Because the military is going to drive it when it commits to removing human actors from the control of its warplanes. Hundreds of billions of dollars will be spent on this. Hell, research and development has already been well under way for several years now. All the aerospace contractors with one foot in the military pool and the other in the civil aviation market are going to have their hands deeply involved in these projects, and the results are unquestionably going to make their way into the civilian market once the technology reaches maturity and approval is granted by the appropriate regulatory agencies.
It's well under way, it seems. The USAF's Global Hawk drones are entirely automated, and the USAF is upgrading its Reapers to the Army's standard of automated takeoffs and landings: https://www.flightglobal.com/news/articles/usaf-to-automate-mq-9-takeoffs-and-landings-424975/

Another tick in the Global Hawk's favour is that it costs almost as much as an F-35.

ColoradoCleric
Dec 26, 2012

by FactsAreUseless

SimonCat posted:

100 round magazines are garbage and very jam prone. You'd be better off with a standard sized 30 round magazine.

30 rounds is a death machine only meant for killing babies and puppies but reloading 10 round magazines is ok.

Bar Ran Dun
Jan 22, 2006




ElCondemn posted:

Ideas are not technology. I'm really not getting whatever concept you're trying to explain.

This is the thing you are not getting. Any physical technology needs a corresponding conceptual technology to be useful. Sometimes the idea that serves as a tool comes first, sometimes the physical tool. They're both technology.

One has to think about both. A new widget that does a new thing is nothing without a use case inside a larger system. A physical technology implies systems in which it is used (and occasionally that goes in the other direction too).

Technology can be knowledge, or a system, or even an ideology.

ThisIsJohnWayne
Feb 23, 2007
Ooo! Look at me! NO DON'T LOOK AT ME!



Or it could be the most pretentious way to say you found another use for a thing. "Hey Bob! I found a new use for... sorry, invented a disruptive use case of... the hammer! Turns out, you can hit people with it too!"

Doctor Malaver
May 23, 2007

What happened made you stronger

Solkanar512 posted:

I'm not going to apologize for wanting people to actually learn about poo poo before they believe they can come in and disrupt it. Our safety record is incredible, and I'm not going to let a bunch of amateurs think it's a good idea to go around screwing that up without a whole lot of testing and data showing actual improvement.

You're fighting a strawman; more precisely, OOCC in strawman form, coming to the nearest airport to replace pilots with an .exe file he put together over the weekend.

It should be obvious that these changes will be gradual, will include enormous amounts of testing, will also include all the applicable industry best practices, and will be monitored and led by people who have much more experience in the field than you have.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Doctor Malaver posted:

You're fighting a strawman; more precisely, OOCC in strawman form, coming to the nearest airport to replace pilots with an .exe file he put together over the weekend.

No, my proposal was very precise and serious, and the real flaw with my plan is that in 50 years the airline industry is going to hit a wall as it gets hard to procure enough working Nintendo Switches to implement the exact protocol I described in that one post they made the sole directing guidance for their transition away from human pilots.

1337JiveTurkey
Feb 17, 2005

Appendix F to the Rogers Commission Report seems relevant here. These threads almost always degenerate into a comparison between stereotypes of serious engineers and feckless programmers, and the Challenger disaster is an example of that not being the case. The software worked flawlessly, but the shuttle was lost because engineers assigned a margin of safety to what amounted to load-bearing drywall.

Bar Ran Dun
Jan 22, 2006




ThisIsJohnWayne posted:

Or it could be the most pretentious way to say you found another use for a thing. "Hey Bob! I found a new use for... sorry, invented a disruptive use case of... the hammer! Turns out, you can hit people with it too!"

I'm sparing you the pretentious stuff. I've got some "context of the firm" diagrams from grad school that are hooo boy.

I'm also not saying something controversial or extraordinary. It's neither of those things to say ideas can be technology.

SnowblindFatal
Jan 7, 2011

BrandorKP posted:

I'm sparing you the pretentious stuff. I've got some "context of the firm" diagrams from grad school that are hooo boy.
...that you also need to post.

Solkanar512
Dec 28, 2006

by the sex ghost

ElCondemn posted:

Pot calling the kettle black much?


What is the technical reason why this can't be done with redundancy and automation in both the systems and the sensors? Also, how often does this happen? What fraction of these events had a better outcome because of a human pilot? You're making claims left and right as if computers can't do, or would be worse than a human pilot at, what you just described; nothing you've said excludes an automated pilot from doing a better job than a human.

How often does what happen? I'm not sure of your antecedent here.

No one is saying that this can't be done with redundancy or automation; we already have this. One example is the redundant hydraulic lines that ensure you can still operate the flaps and ailerons and so forth. Those used to be routed through the same parts of the plane, but after a few engine failures in which shrapnel cut those lines, they started being routed through different areas. With sensors you need to be more careful and ensure you have good ways to deal with conflicting information. There are generally three artificial horizons (pilot/copilot/center) that are driven separately, and pilots can switch their own to mirror another if it fails for some reason. These are improvements that come straight out of the NTSB and related organizations.

Those can only deal with the situations we can think up. Human pilots are great because they add a very good layer of redundancy and are able to deal with novel issues as they arise. The walking AI thing is an interesting example, but at 40,000 ft you're not going to have time for an emergency involving unforeseen issues that won't be fully understood until after a year or two of study.

If you want that automated pilot so badly, you need to show that it improves upon the safety record we have now. On American carriers, we're talking about incidents measured in the single digits per million flights, and no deaths for seven years straight.

Paradoxish posted:

You're essentially saying that you wouldn't want a pilot to fly the plane if he suddenly went blind, which is obviously true. Why should I be more concerned about sensors failing than about the human crew somehow becoming disabled? Especially when you can have far more backup sensors than backup pilots on any given flight.

Well, for one thing, we don't have evidence that pilots becoming disabled during flight is a recurring, contributing cause of modern crashes. We already have strict health and age requirements, we have copilots, and longer flights even have multiple crews. You may also encounter systemic issues with the backup equipment that ensure all the backups will be useless. That's why, when it comes to redundancy, you want multiple different systems as much as possible, and pilots are one of those systems.

Doctor Malaver posted:

It should be obvious that these changes will be gradual, will include enormous amounts of testing, will also include all the applicable industry best practices, and will be monitored and led by people who have much more experience in the field than you have.

You'd think it would be obvious, yet every time this issue comes up, here or elsewhere, folks come crawling out of the woodwork with half-baked ideas and no consideration for how changes can lead to increased danger. I'm not sure why you felt the need to add the personal dig, though.

SeaWolf posted:

You can brush it off by claiming it's too complicated, there are too many variables, everything is written in blood, etc., but in the end it's just another form of technophobia.

Just remember a few things. First, military flight and civilian flight are covered under completely different standards of safety, operating philosophies, and so on. Second, pointing out that things are complicated is not the same thing as being a technophobe. I don't doubt that more automation is on the way; I just doubt those who think it's a trivial matter to just dump the pilots. It takes time and effort to do this correctly, and folks need to take that into consideration. When I see folks like Amazon petition the FAA for exclusion from experimental flight rules (2014, I believe)* because "they're making new parts too quickly" and "they have an astronaut on their team", it comes across like they don't know what the gently caress they're doing. Luckily they've improved since then. Finally, there's a difference in the amount of safety we expect from vehicles that carry humans versus vehicles that don't.

*To be very, very general: the number and depth of FAA conformity inspections (inspections to ensure that the aircraft is being built according to the engineering plans and that the build is consistent) are based on the experience of the manufacturer/supplier and how new the part is, in terms of materials, engineering, flight hours, that sort of thing. The first 787 is going to receive a whole lot more scrutiny than an upgraded 737 that has been flying in some form or another for decades. So when Amazon comes around wanting to skip all that with no prior manufacturing experience, the whole letter looks like a complete loving joke.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Solkanar512 posted:

Those can only deal with the situations we can think up. Human pilots are great because they add a very good layer of redundancy and are able to deal with novel issues as they arise. The walking AI thing is an interesting example, but at 40,000 ft you're not going to have time for an emergency involving unforeseen issues that won't be fully understood until after a year or two of study.

Is that a real thing that really happens, though? Are there more than a teeny tiny handful of pilots saving the day with truly out-of-the-box solutions, and not just "instead of pressing THIS button I pressed THAT button" choices that would be 100% doable by a machine? Like physical MacGyvering? You don't even need the robot to make up a solution on the spot; the ineffable human souls can pre-load "what do you do if these parts fail" modes ahead of time and build hierarchies of "if this breaks, this other part can replace its function if you use it like this" tables.
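
A toy sketch of those pre-loaded fallback tables (every name here is invented for illustration):

code:

# Lookup from a failed component to pre-vetted, ranked substitutes.
FALLBACKS: dict[str, list[str]] = {
    "pitot_tube":      ["gps_groundspeed", "inertial_estimate"],
    "radio_altimeter": ["baro_altimeter", "gps_altitude"],
    "elevator":        ["trim_tabs", "differential_thrust"],
}

def degrade(component: str, available: set[str]) -> str | None:
    """Return the first substitute that is still working, if any."""
    for substitute in FALLBACKS.get(component, []):
        if substitute in available:
            return substitute
    return None  # nothing left; escalate to the emergency procedure

print(degrade("pitot_tube", {"inertial_estimate", "baro_altimeter"}))
# -> 'inertial_estimate'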


Taffer
Oct 15, 2010


Solkanar512 posted:

You'd think it would be obvious, yet every time this issue comes up, here or elsewhere, folks come crawling out of the woodwork with half-baked ideas and no consideration for how changes can lead to increased danger.

This is the issue in this conversation. You are expecting people to come up with thorough, robust, informed proposals detailing how automation should be implemented. You're having this argument on a dead gay comedy forum. Nobody here is qualified or interested enough to come up with those proposals. They are half-baked because why the hell would anyone here try to make a fully baked idea?

Someone says "hey, this thing could be automated, we should be doing that" and you say "THAT'S DANGEROUS" because you can't grasp that it was never meant to be a complete proposal.

Yes, airplanes can be automated. They already are, to a massive extent. Your claims that automation is impossible or too dangerous are short-sighted and ignore basically every prior example of an industry being automated. People inside bloated industries with a vested interest in the status quo always say change is not possible, until someone ignores their whining and does it. AI can easily do all of the things that have been discussed in this thread, despite your silly assertion that humans can intuit their way around serious technical failures even when they don't understand what those failures are (this is absurd). First it was chess, then cars, then medicine, surgery, flight, Go, you name it. There is nothing special about airplanes that says they can't be automated. And OF COURSE the process toward automation should, and absolutely WILL, take every possible safety precaution into account. Nobody in this thread has even once said otherwise, and you are acting like every person here doesn't care about safety.
