echinopsis
Apr 13, 2004

by Fluffdaddy

Shame Boy posted:

y'all just reminded me of some stupid new feature design we got handed at work a while ago. basically we need to let the user select how many of something they want, using a touch screen. the max is 10, and an overwhelming majority of people will only ever want 1 and rarely 2 of this thing. so i'm thinking something like, a number box with up and down arrows, with 1 already set, right? we get the designs (that have already been approved by the customer, of course) and it's a loving pin pad, with logic so if you enter any number all the other numbers are disabled (since you can only enter the single digits 2-9), except if you enter 1 then all the numbers except zero are disabled, so you can enter 10. this is of course in a separate modal that pops up instead of just like, on the page you're already on :wtc:

omg this drives me mad lol


MononcQc
May 29, 2007

should have been a dial that goes all the way to 10 even if there's an 11 on there

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.

Shame Boy posted:

y'all just reminded me of some stupid new feature design we got handed at work a while ago. basically we need to let the user select how many of something they want, using a touch screen. the max is 10, and an overwhelming majority of people will only ever want 1 and rarely 2 of this thing. so i'm thinking something like, a number box with up and down arrows, with 1 already set, right? we get the designs (that have already been approved by the customer, of course) and it's a loving pin pad, with logic so if you enter any number all the other numbers are disabled (since you can only enter the single digits 2-9), except if you enter 1 then all the numbers except zero are disabled, so you can enter 10. this is of course in a separate modal that pops up instead of just like, on the page you're already on :wtc:

lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons.

i'm going to chalk that one up to an extreme case of engineer brain. it just has that feel. "what's a way to describe this in a mathematical function" rather than "what's the human experience of this"
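
a minimal sketch of that radio-button idea, assuming a plain state object rather than any real UI toolkit (QuantityPicker and its methods are invented for illustration):

code:

class QuantityPicker:
    """Ten buttons, exactly one selected, 1 preselected for the common case."""

    def __init__(self, maximum: int = 10, default: int = 1):
        self.options = list(range(1, maximum + 1))
        self.selected = default  # most users want 1, so start there

    def tap(self, value: int) -> int:
        # selecting one value pops out all the others, radio-button style
        if value in self.options:
            self.selected = value
        return self.selected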

Shame Boy
Mar 2, 2010

Sagebrush posted:

lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons.

i'm going to chalk that one up to an extreme case of engineer brain. it just has that feel. "what's a way to describe this in a mathematical function" rather than "what's the human experience of this"

i think it started as "this is a number entry field so obviously we should just put the same thing we use for other number entry fields so i don't have to think about this" (we use a pinpad thing sometimes for entering longer numbers) and then "but wait, we'd have to add logic to make it make sense with the range of 1 to 10..." and so on
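
for the curious, the pin-pad behaviour described above boils down to something like this -- a reconstruction from the thread's description, not anyone's actual code (the no-leading-zero rule is my assumption):

code:

def enabled_keys(entered: str) -> set[str]:
    """Which pin-pad keys stay enabled, per the design described above."""
    keys = set("0123456789")
    if entered == "":
        return keys - {"0"}   # assumed: quantity starts at 1, no leading zero
    if entered == "1":
        return {"0"}          # only "10" is still reachable
    return set()              # 2-9 are complete; every key locks out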

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Expo70 posted:

i really wish i could do this kind of research full-time and just graph and find all the optimal control solutions to given problem types. Even fighter cockpits have awful solves for a lot of these problems.
like what, you're gonna do low aspect lofting BVR on a pair of fat MFDs for defence posture/detection response posture with some dumbass helmet that isn't indicating even 1/5th of the poo poo the computer knows because you're still using manual caging to lock up and issue responses? I get that you don't trust your inference detection but at this point you're not even performing manual ID unless you're eyeballing blobs or doing markup passover via AWACS datalink and shrugging shoulders. bleugh

that said, i do sleep a little better knowing imperialist shitheels are at least Δ% less effective in the wild. if they were overconfident and too independent like some of the weird incidents in the 1990s, that would be horrific. I'm at least glad they had tomcatters leading 18s and 15s because their better pods at least meant they could properly ID poo poo on bridges and not just blow people away. scary scary stuff.

if i may hawk my own particular nerd interests for a little bit: i think you'd be interested in the ergonomics of the saab viggen, op. a long time ago i translated and posted a seminar transcription in which a bunch of old engineers went over the interface design (and a lot of tangential anecdotes about software engineering in 1970s sweden). starts here (old goldmined coldwar thread). i'd say it's relevant both to this thread and to yospos in general, there's quite a bit of interesting computing history in there as well as a bunch of discussion about project management and so on.

it doesn't go into details about what the cockpit was actually like to work in, but the earlier aj37 variant is pretty faithfully modeled in dcs and has a flight manual in english if you really want to get into it. not sure if i'd recommend that though, there are a lot of very subtle weirdnesses about it that aren't apparent at a glance, and just reading the flight manual probably isn't a good way to discover them. like for example most of its weapons require you to input the target's barometric altitude (QFE) and there's a lot of interesting reasons why, it has a rocket sight that seems like it's ccip at a glance but in reality absolutely isn't because of reasons, and so on. they tried really hard to make things easy for the pilot, as far as 1960s computer technology would allow.

echinopsis
Apr 13, 2004

by Fluffdaddy
that seems like posting gold for someone who knows what the hell you're talking about


anyway I've been thinking about the touch screen interfaces at supermarkets


if a dollar of research toward improving user experience were to be spent, a significant portion should go toward this. it's something that i'm guessing hundreds of millions of people use every day. a second wasted at that machine adds up to years or decades of productive time gone to waste

a) it should identify you because it read your user card or phone or whatever (not visual recognition) and prepare an interface that's appropriate to you
b) if the interface it assumed for you is the incorrect one, it should be obvious and easy to pull back to the generic interface that suits anyone equally

that's all i've got. make it good. save the world
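
a loose sketch of those two rules, with hypothetical names (Screen, pick_interface), since the post describes behaviour rather than any real API:

code:

from dataclasses import dataclass

@dataclass
class Screen:
    layout: str
    escape: str | None = None   # the obvious path back to the generic UI

def pick_interface(token: str | None, profiles: dict[str, str]) -> Screen:
    # a) identify via a scanned card/phone token, never visual recognition
    if token is None or token not in profiles:
        return Screen(layout="generic")
    # b) tailor the screen, but keep bailing out to generic one obvious tap away
    return Screen(layout=profiles[token],
                  escape="Not you? Switch to the standard screen")

# e.g. pick_interface("card:1234", {"card:1234": "frequent-buyer"})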

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
doing computer stuff at 2.2% profit margin businesses is basically a brutal exercise in kakistocracy

Shame Boy
Mar 2, 2010

i came across this linked from an NTSB report if anyone's interested, it's basically a big list of "how to make a cockpit correctly" that aggregates a bunch of different rules from the FAA and DoD and data from studies. most of it is just big lists of rules to follow, but it's got little summary sections after each big list that describe the reasons why you should do things that way, along with examples:

https://rosap.ntl.bts.gov/view/dot/12411

i love how fuckin' detailed it gets, like there's an entire massive section just about fonts. and tbh the font section's actually pretty useful in general, it's got a ton of info on when to use what font, exactly what ratio of height and width to use, how thick the characters should be etc. gonna have to reference it next time i'm working on a thing with a screen.

tk
Dec 10, 2003

Nap Ghost

Sagebrush posted:

lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons.

i'm going to chalk that one up to an extreme case of engineer brain. it just has that feel. "what's a way to describe this in a mathematical function" rather than "what's the human experience of this"

Yeah but what if next year we want to have 11 buttons! There wouldn’t be space for that! This design is the future and we must implement it now.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

echinopsis posted:

that seems like posting gold for someone who knows what the hell you're talking about


anyway I've been thinking about the touch screen interfaces at supermarkets


if a dollar of research toward improving user experience were to be spent, a significant portion should go toward this. it's something that i'm guessing hundreds of millions of people use every day. a second wasted at that machine adds up to years or decades of productive time gone to waste

a) it should identify you because it read your user card or phone or whatever (not visual recognition) and prepare an interface that's appropriate to you
b) if the interface it assumed for you is the incorrect one, it should be obvious and easy to pull back to the generic interface that suits anyone equally

that's all i've got. make it good. save the world

Not all time is productive time. And in general the population's leisure time is nonproductive. Even if it were productive, it may not be productive for the agent in control.

Besides, the obvious improvement is to use CV to help identify what vegetables you just placed on the scale, 90% of the time. Just throw up the top two guesses and an "other" button that falls back to the current search.
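
a sketch of that idea, assuming a hypothetical classifier that hands back (label, confidence) pairs -- produce_buttons is an invented name:

code:

def produce_buttons(predictions: list[tuple[str, float]]) -> list[str]:
    """Top two CV guesses plus an 'other' escape to the existing search UI."""
    ranked = sorted(predictions, key=lambda p: p[1], reverse=True)
    return [label for label, _ in ranked[:2]] + ["Other (search the full list)"]

# produce_buttons([("banana", 0.91), ("plantain", 0.06), ("mango", 0.02)])
# -> ['banana', 'plantain', 'Other (search the full list)']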

MononcQc
May 29, 2007

I've name-dropped resilience engineering a few times in this thread, and it's one of the least well-defined / most overloaded terms.

Here is an interesting little bit from a novel I read last summer, which had a quick note about the term “resilience.” I’m translating loosely from French:

quote:

A term borrowed from metallurgy, appropriated by pop-science psychiatrists and countless mediocre motivational speakers, resilience is one of the most overloaded words of this era. To the common man it is synonymous with a capacity to overcome obstacles and to grow despite adversity; resilience rather points to the quality of materials that can return to their original form after having been hammered, burnt, twisted, or put under tension.

To apply it to humans while respecting the original etymology, we must first abandon all notions of naive optimism. The psychopath who maintains his psychological rigidity while interrogated is resilient, the drug addict who finds himself still tolerant to drug effects after a forced withdrawal is resilient, the soldier who lets himself be showered by enemy fire in a lost battle is resilient; those who show resignation are more resilient than the optimists. It is therefore not a question of reaching for the higher planes of virtue, but of being unwavering in your true nature. Mother Teresa and Adolf Hitler both represent excellent resilience models.

The novel is Ta mort à moi, and the tone is for sure cynical, but I did enjoy the heavy pessimistic contrast with resilience as used in resilience engineering, and the idea that evolving and adapting is very different from resilience, which is just about returning to your original shape regardless of whether that shape is good or not.

So how does resilience engineering define resilience? Well, that's this week's paper, once again by David Woods, titled Four concepts for resilience and the implications for the future of resilience engineering. The paper opens by admitting that the popularity of the term has led to confusion regarding what it means in the first place. I recall seeing other papers that held the ill-defined term to be one of the biggest weaknesses of a discipline named after it. All the different uses seen around the place have been categorized into four groups by Woods: rebound, robustness, graceful extensibility, and sustained adaptability.

Rebound
Why do some communities, groups, or individuals recover from traumatic disrupting events or repeated stressors better than others, and resume previous normal functioning? Most research there asserts that the difference comes from which resources and capabilities were present before the disruptions, not from what happens once the surprise hits. The paper quotes:

quote:

“the ability to deal with a crisis situation is largely dependent on the structures that have been developed before chaos arrives. The event can in some ways be considered a brutal and abrupt audit: at a moment's notice, everything that was left unprepared becomes a complex problem, and every weakness comes rushing to the forefront”

A second important aspect is that research focusing on rebound cares a lot more about the fact that disruptions are surprises than about the characteristics of each individual disruption. The surprise challenges a model and forces revisions into the system.

This creates a weird effect where this line of research drives towards studying another definition of resilience (graceful extensibility): to deal with disruption, the capability to adapt has to already be there, so resilience is treated as a potential. But you can only measure that potential by validating it across disruptions, which this definition doesn't like focusing on.

In short, a lot of questions about resilience are about why or how organizations rebound, but the research has mostly moved on to study systems where there is an ongoing and continual ability to adapt and adjust.

Robustness
This one is generally a conflation of resilience with another term: robustness, the ability to absorb disruptions. More robustness means your system can tolerate a broader range of disturbances while still responding effectively. Generally, robust control works, and only works, for cases where the disturbances are well-modelled.

So this definition remains sensitive to the question of what happens when the disturbance is outside the scope of what was modelled. The typical failure mode here is one where the system reaches its limits and suddenly collapses. Woods states that brittleness tends to live right at the boundaries of robustness. Cybersecurity is an interesting domain here: you can be extremely robust to specific types of threats, but once an attack is novel, using a different approach, everything goes bad.

The naive understanding of robustness is that you can continuously expand the envelope of stressors you can cope with. In practice, empirical research has shown that it is more often a tradeoff: handling some things means there are other things to which you become more fragile. This, once more, pushes towards the latter two definitions, which focus more on ways to adapt than ways to predict, because that tradeoff is more and more considered to be fundamental and unavoidable (think, for example, of heuristics and limits to attention).

Graceful Extensibility
Graceful extensibility is a sort of play on the idea of graceful degradation. Rather than asking how or why people, systems, or organizations bounce back, this line of approach asks: how do systems stretch to handle surprises? Resources are finite, environments keep changing, and boundaries shift in ways that require stretching and elasticity. A tenet here is that without the ability to stretch and adjust, your brittleness is far more severe than expected during normal operations, and generally gets exposed through extremely rapid collapses.

So a big question is: where's the boundary? We never know; incidents define it. There's a rate and tempo to events that lets us glimpse what it might be, so it can be looked at, tracked, and exercised. A common challenging scenario is how an organism that deals with "normal" challenges copes with two of them happening at once, for example, because this risks overextending the system.

The idea here is influenced by Safety-I (studying and preventing failures) vs. Safety-II (studying and enhancing successes), such that graceful extensibility can be seen as a positive attribute: how do we create a readiness-to-respond that is a strength and can be leveraged in all sorts of situations, rather than narrowing it to being the avoidance of negative effects?

Contrasted with rebound, the approach here is to look at past challenges and see them as a way to gauge the potential to adapt to new surprises in the future. It also allows studying sequences and series of rebounds over a longer-term view of the system. How do they succeed and how do they fail?

The idea is that they tend to fail when exhausting their capacity to mobilize a response as disturbances grow and cascade, something dubbed decompensation. This tends to be detected when the ability to recover from a crisis takes longer and longer, which gives the impending sense of a tipping point or collapse. The positive version of it is the anticipation of bottlenecks and crunches, and being able to deal with them. There are things that can be done to aid this resilience potential, but it contains its own challenges, where an organization can hinder its own capacity while trying to improve it.
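
a toy illustration of that warning sign (mine, not the paper's): flag decompensation when successive recoveries keep taking longer.

code:

def decompensating(recovery_minutes: list[float], window: int = 3) -> bool:
    """True when the last `window` recovery times are strictly increasing."""
    recent = recovery_minutes[-window:]
    return len(recent) == window and all(
        a < b for a, b in zip(recent, recent[1:]))

# decompensating([12, 15, 14, 20, 31, 55]) -> True: each crisis takes longer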

This leads to the fourth definition...

Sustained Adaptability
This refers to the ability to manage/regulate the adaptive capacities of systems. In short, while the past can be used to calibrate the potential for future resilience, the past is also not predictive, and you can hit walls where the capacity is gone. Resilience-as-sustained-adaptability asks three questions:
  1. what governance or architectural characteristics explain systems that succeed or fail at sustained adaptation?
  2. what design principles and techniques would allow one to engineer a system that adapts in a sustained manner?
  3. how would you know you're succeeding?
Expected challenges to sociotechnical systems over their life cycle include:
  • surprises will keep challenging boundaries
  • conditions and contexts will keep changing and shifting the boundaries
  • adaptive shortfalls will happen and people will have to step in
  • the factors that provide adaptability and the needs for them will shift over time
  • classes of changes will happen and the system as a whole will need to readjust itself and its relationships
A whole lot of the discipline is therefore interested in all the tradeoffs people make, and that biological systems (or ecosystems) make, and particularly which ones are fundamental and how they apply to other systems as well.

The agenda for this type of resilience is managing the capacities dedicated to resilience. In this perspective, it makes sense to say a system is or isn't resilient based on how well it balances all these tradeoffs.

-----

Woods states that the yield from the first two types of resilience has been low. The latter two approaches, the most positive ones, tend to provide better lines of inquiry, though the discipline is still young.

MononcQc
May 29, 2007

Since my last post was large and less about ergonomics, you also get a reference to a really cool short paper, The Problem with Checklists, which looks at the transfer of checklists from the airline industry to the healthcare industry (reported as a huge success in The Checklist Manifesto) and analyzes how their adoption failed to show actual improvements in western healthcare, often due to their design.

It's short enough (and with pictures!) that it doesn't need cliff notes.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
If a plane crashes, a bunch of congresspeeps ream the airline c-levels. This, not checklists, is the basic feedback loop that makes air flight the safest powered transportation that isn't an elevator.

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
Why is the faa unequivocally one of the only functional civilian govt agencies? The faa head shares the dressing-down when a plane crashes too

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

valley brain callin in with the make the entire plane out of black box takes

Cybernetic Vermin
Apr 18, 2005

bob dobbs is dead posted:

Why is the faa unequivocally one of the only functional civilian govt agencies? The faa head shares the dressing-down when a plane crashes too

in general it is a bit of a mystery how the us of all places managed to create a bunch of really good regulatory bodies (since destroyed to some extent). the fda is also up there with the faa in doing a ton of the gruntwork on drug safety for the whole world.

and i say this now living back in europe and convinced that europe is better at most government things than the us.

cinci zoo sniper
Mar 15, 2013




Cybernetic Vermin posted:

in general it is a bit of a mystery how the us of all places managed to create a bunch of really good regulatory bodies (since destroyed to some extent). the fda is also up there with the faa in doing a ton of the gruntwork on drug safety for the whole world.

and i say this now living back in europe and convinced that europe is better at most government things than the us.

hell, i've never lived in the us proper and i think that we don't have anything here to compete with the fda (the f part specifically), faa, nhtsa, and that one smaller industrial safety thing which is not osha but really cool. in general, americans suck at making a cohesive government, but are quite good at making boutique agencies

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.

MononcQc posted:

Since my last post was large and less about ergonomics, you also get a reference to a really cool short paper, The Problem with Checklists, which looks at the transfer of checklists from the airline industry to the healthcare industry (reported as a huge success in The Checklist Manifesto) and analyzes how their adoption failed to show actual improvements in western healthcare, often due to their design.

It's short enough (and with pictures!) that it doesn't need cliff notes.

This is a good article. ~As a pilot~ I was like "But checklists are great! Why wouldn't they work in healthcare?" and then I got to this part:



quote:

For an Airbus A319 (figure 1), a single laminated gatefold (four sides of normal A4 paper) contains the 13 checklists for normal and emergency operations. Tasks range from 2 (for cabin fire checklist) to 17 (for before take-off checklist), with an average of seven per checklist. Each task is described in no more than three words and can be checked immediately, with usually a single word of confirmation. It has no check boxes, does not require signature and is designed to be used by one person, with specific checklists performed aloud.

In contrast, the Centers for Disease Control and Prevention central line-associated blood stream infections checklist has 18 tasks, with no less than 4-word descriptors (and up to 22 words), and describes non-procedural tasks that need to be completed over several minutes (and hours), which cannot be 'checked' (eg, 'empower staff'). The WHO safer surgery checklist (first edition) has 21 tasks (7+7+7), with wording ranging from 2 to 16 words per task, and involves several people simultaneously. Some tasks are easily checked and completed, while some require discussion and some cannot be 'checked.'

Lmaoing @ "empower staff" and requiring a signature on the checklist to proceed. Talk about cargo cult science

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

bob dobbs is dead posted:

If a plane crashes, a bunch of congresspeeps ream the airline c-levels. This, not checklists, is the basic feedback loop that makes air flight the safest powered transportation that isn't an elevator.

you shame the thread with this awful take

i tried looking for history papers on the origins and development of aviation safety culture as it exists today but didn't really find anything obvious, someone please send help

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
The ntsb's precursor was founded in response to a notre dame football coach dying in a 1931 fokker f-10 crash, which led to national outcry, open demands from the president and congress, and mass firings and reorganizations of government bureaus

importantly, the purges also banned hiding details of aircraft investigations

the serious political power arrayed in support of airline safety cannot be discounted. that's not a 'take'

compare to the 1951 crash that killed basically the entire soviet military ice hockey team and got no significant organizational response from the ussr: that's a material part of why aeroflot had more than half the world's aviation fatalities until the 90s

bob dobbs is dead fucked around with this message at 19:14 on Feb 21, 2022

Shaggar
Apr 26, 2006

TheFluff posted:

you shame the thread with this awful take

i tried looking for history papers on the origins and development of aviation safety culture as it exists today but didn't really find anything obvious, someone please send help

imo it seems like mostly a tradition handed down from pilot to pilot rather than anything done by the faa. it probably starts with pilots trained by the us military and spreads out from there. additionally, for general aviation there's a high cost of entry which is gonna filter out people doing it on a whim who might not take safety as seriously.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

bob dobbs is dead posted:

The ntsb's precursor was founded in response to a notre dame football coach dying in a 1931 fokker f-10 crash, which led to national outcry, open demands from the president and congress, and mass firings and reorganizations of government bureaus

importantly, the purges also banned hiding details of aircraft investigations

the serious political power arrayed in support of airline safety cannot be discounted. that's not a 'take'

compare to the 1951 crash that killed basically the entire soviet military ice hockey team and got no significant organizational response from the ussr: that's a material part of why aeroflot had more than half the world's aviation fatalities until the 90s

you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works

Shaggar posted:

imo it seems like mostly a tradition handed down from pilot to pilot rather than anything done by the faa. it probably starts with pilots trained by the us military and spreads out from there. additionally, for general aviation there's a high cost of entry which is gonna filter out people doing it on a whim who might not take safety as seriously.

the us military had an absolutely godawful safety culture until relatively recently so this doesn't ring true to me at all

bob dobbs is dead
Oct 8, 2017

I love peeps
Nap Ghost
it's an entire century of congress yelling at people, no joke

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

TheFluff posted:

you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works

the us military had an absolutely godawful safety culture until relatively recently so this doesn't ring true to me at all

bud holland air show practice safety culture

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.

bob dobbs is dead posted:

it's an entire century of congress yelling at people, no joke

TheFluff posted:

you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works

it's half and half. bob's thesis that aviation safety only advances through congressmen yelling at CEOs is wrong, but he is partially correct in that basically every rule is written in blood.

for instance, the FAA was created specifically in response to this incident:

https://en.wikipedia.org/wiki/1956_Grand_Canyon_mid-air_collision

but that doesn't mean every development in aviation safety came from a congressional inquiry.

a shitload of them came from WW2 experience, for instance: making controls have differently shaped levers so that they are harder to confuse by touch, orienting the six basic gauges in the same configuration in every plane, using standardized colors on the airspeed indicator. all can be traced to some fuckup in ww2 where people died.

many more came from pure research into human factors conducted by NASA and other institutions. many of the passive aerodynamic safety characteristics designed into modern planes came from NASA. to this day they maintain a really good aviation safety reporting program that collects more than 100,000 incident reports a year and uses them to issue new safety directives.

some advances came totally at random through personal experiences. jet fuel nozzles are oblong and don't fit round avgas filler tanks; this feature was independently invented by a pilot whose plane was misfueled at an air show, leading to engine failure shortly after takeoff. he survived and came up with a solution.

congress also pulls some really dumb poo poo with aviation safety. back in the day, in order to fly in the airlines, you needed 250 hours to be a first officer (i.e. copilot), and 1500 hours to be a captain. in 2009 a small airliner crashed and 50 people died. i will just quote wikipedia on what happened:

quote:

Following the clearance for final approach, landing gear and flaps (5°) were extended. The flight data recorder (FDR) indicated the airspeed had slowed to 145 knots (269 km/h; 167 mph).[3] The captain then called for the flaps to be increased to 15°. The airspeed continued to slow to 135 knots (250 km/h; 155 mph). Six seconds later, the aircraft's stick shaker activated, warning of an impending stall, as the speed continued to slow to 131 knots (243 km/h; 151 mph). The captain responded by abruptly pulling back on the control column, followed by increasing thrust to 75% power, instead of lowering the nose and applying full power, which was the proper stall-recovery technique. That improper action pitched the nose up even further, increasing both the g-load and the stall speed. The stick pusher activated (The Q400 stick pusher applies an airplane-nose-down control column input to decrease the wing's angle of attack (AOA) after an aerodynamic stall),[3] but the captain overrode the stick pusher and continued pulling back on the control column. The first officer retracted the flaps without consulting the captain, making recovery even more difficult.[24]

In its final moments, the aircraft pitched up 31°, then pitched down 25°, then rolled left 46° and snapped back to the right at 105°. Occupants aboard experienced g-forces estimated at nearly 2 G. The crew made no emergency declaration, as they rapidly lost altitude and crashed into a private home at 6038 Long Street.

stall recovery is taught in like lesson 2 of your private pilot certificate. doing what the captain and first officer both did here would be an instant "examiner grabs the controls out of your hands" fail of your checkride and probably would lead to an investigation questioning your instructor's judgment in sending you to the test. the captain had 3379 flight hours and the first officer had 2244.

congress responded to the crash by changing the law so that you now need 1500 hours to be an airline first officer. :thumbsup:

Sagebrush fucked around with this message at 06:17 on Feb 22, 2022

MononcQc
May 29, 2007

Shame Boy posted:

i came across this linked from an NTSB report if anyone's interested, it's basically a big list of "how to make a cockpit correctly" that aggregates a bunch of different rules from the FAA and DoD and data from studies. most of it is just big lists of rules to follow, but it's got little summary sections after each big list that describe the reasons why you should do things that way, along with examples:

https://rosap.ntl.bts.gov/view/dot/12411

i love how fuckin' detailed it gets, like there's an entire massive section just about fonts. and tbh the font section's actually pretty useful in general, it's got a ton of info on when to use what font, exactly what ratio of height and width to use, how thick the characters should be etc. gonna have to reference it next time i'm working on a thing with a screen.

I was drafting a work blog post today that complains about bad dashboard design and mentions things discussed in this thread already (eg. there is more signal than attention available, and I guarantee you that most heavy dashboards have lots of metrics willfully ignored or that nobody knows what they mean). A coworker commented "like when an airplane crashes because the pilot can't hear some alarm" and I was like "oh no, planes actually have well-designed dashboards" and could instantly refer to that document.

so uh yeah thanks for the cool reference.

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.
Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it.

How can the system be broken if there's no way to tell? Checkmate.

Sagebrush
Feb 26, 2012

ERM... Actually I have stellar scores on the surveys, and every year students tell me that my classes are the best ones they’ve ever taken.
Related:

https://www.youtube.com/watch?v=2hMn7ZweF6s

That insistent beeping is the alarm that goes off when your manifold pressure is lowered to a descent idle (implying that you're about to land) but the gear is still up. Clearly both of the pilots were just completely habituated to it.

Shame Boy
Mar 2, 2010

Sagebrush posted:

Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it.

How can the system be broken if there's no way to tell? Checkmate.

i recently read one of the preliminary NTSB reports for that whole thing and they did apparently get several (more general) instrument disagree alarms, so i'm not entirely sure that would have helped much more but idk

echinopsis
Apr 13, 2004

by Fluffdaddy
alarms should hurt more clearly

MononcQc
May 29, 2007

Sagebrush posted:

Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it.

How can the system be broken if there's no way to tell? Checkmate.

A more specific situation for cockpit automation afaict is something called mode error. The book Behind Human Error introduces it through one of the most fascinating incident reports, the Grounding of the Royal Majesty, where a ship had a disconnected GPS antenna and was (unbeknownst to the crew) fully in dead reckoning mode, and ended up grounding itself after days of being slightly off course.

This is the sort of thing that I believe also describes the 737 MAX better. Here's the Behind Human Error description:

[scanned excerpt from Behind Human Error not preserved]

and another, shorter example than the Royal Majesty:

[second scanned excerpt not preserved]

Once again, that book is my bible around incidents :toot:

Carthag Tuek
Oct 15, 2005

Times shall come,
times shall roll on,
kin shall follow the course of kin



lol at OPEN DESCENT having multiple manual and automatic modes of engaging

big tesla energy

MononcQc
May 29, 2007

Yeah, Tesla, as a tech-industry-derived type of approach, does very clumsy automation that is rife with mode errors, but also with all the problems pointed out by Lisanne Bainbridge in 1983 in The Ironies of Automation.

The sort of internal motto they have, "each human input is an error," also lines up with the bad attitude pointed out in the papers I reviewed here 2 and 3 weeks ago: they try to focus the automation on being more and more independent rather than making it cooperative, which is necessarily an unproductive approach. It's the standard sci-fi dream, but decades of cybernetics and human factors research have found time and time again that you run into goal misalignment and an inability to deal with something called the context gap (which I should cover at some point, but it essentially means: the automation does not know when its model of the world is incorrect, and it needs a human to do that for it because we -- often but not always -- have the capacity to), which in turn leads to incidents.

MononcQc
May 29, 2007

New one I had never read before: The 'Problem' with Automation: Inappropriate Feedback and Interaction, not 'Over-Automation' by Don Norman (of The Design of Everyday Things fame).

To sort of counteract my takes while still aligning with the previous papers, this one (from 1990) makes the argument that the problem wasn't that there was too much automation, but that it should be either less powerful or a lot more powerful. That's mostly because he believes the problem is one of feedback. A caveat here is that my post on Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity covers a paper that's 14 years newer, and therefore more comprehensive. So I would keep in mind when reading this Don Norman one that it specifically tackles a subset of the dynamic -- device-to-person communication -- and ignores a significant part of the rest of the sociotechnical dynamics.

anyway, let's get going with Don Norman:

quote:

the problem is not the presence of automation, but rather its inappropriate design. The problem is that the operations under normal operating conditions are performed appropriately, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. When the situations exceed the capabilities of the automatic equipment, then the inadequate feedback leads to difficulties for the human controllers.
[...]
[The automation's] level of intelligence is insufficient to provide the continual, appropriate feedback that occurs naturally among human operators. This is the source of the current difficulties. To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
[...]
Appropriate design should assume the existence of error, it should continually provide feedback, it should continually interact with operators in an effective manner, and it should allow for the worst of situations. What is needed is a soft, compliant technology, not a rigid, formal one.

One of the basic problems around feedback is that it used to be very implicit when you were co-located with the room where all the work was done. But automation tends to create distance between the operator and the system. Rather than being around, feeling things, you read numbers and get a large level of indirection. Don Norman says that this mental isolation is one of the main sources of problems.

The argument is made by comparing 3 incidents in airlines:
  • A China Airlines 747 had a loss of power from an engine. The autopilot took over until it could no longer compensate, and the plane suddenly lost 31,500 ft of altitude before the crew could regain control. The plane was severely damaged.
  • A pilot who was also the airline owner regularly broke protocol by not responding to the first officer's (co-pilot's) questions. People working with him felt it was normal. Eventually the plane crashed; it turns out the captain was incapacitated, but the first officer was intimidated and also felt it was normal not to hear from him, so he never properly corrected or took over during a landing where the glidepath was too steep.
  • A plane's gas tank was reporting incongruent numbers. The flight engineer reported it and was then told to go check it out. While he was out, the captain noticed the wheel was cocked to the right (a sign the autopilot was compensating) and told the first officer to disengage it, which revealed the out-of-balance condition. The flight engineer came back and said there was a visible gas leak and the plane was 2,000 lb out of balance.

The general idea is that the automation was not the problem; the differentiating factor was the level of feedback:

quote:

In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other. But in the first situation, if problems occur, the autopilot will compensate and the crew will notice only by chance (as in the case study of the fuel leak). When automatic devices compensate for problems silently and efficiently, the crew is 'out of the loop', so that when failure of the compensatory equipment finally occurs, they are not in any position to respond immediately and appropriately.

In the case of the second thought experiment where the control was turned over to the first officer, we would expect the first officer to be in continual interaction with the captain. Consider how this would have worked in the case studies of the loss of engine power or the fuel leak. In either case, the problem would almost definitely have been detected much earlier in the flight.
[...]
By reporting upon observations and possible discrepancies, each crew member keeps the rest informed and alerted—keeping everyone 'in the loop'.
[...]
The culprit is not actually automation, but rather the lack of feedback. The informal chatter that normally accompanies an experienced, socialized crew tends to keep everyone informed of the complete state of the system, allowing for the early detection of anomalies

So what are the solutions proposed? Generally, feedback is missing more than it is too present (at the time); it's also essential to learn and know whether commands sent were effective. So the idea is more feedback. But there's another problem: we don't know how to actually give that feedback properly. Don Norman specifies that in all the cases above, the information was available, just not processed properly. The feedback was present but not getting the attention required.

And usually the only way we know of grabbing attention is more alarms:

quote:

The task of presenting feedback in an appropriate way is not easy to do. Indeed, we do not yet know how to do it. We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems. The proliferation of these alarms and the general unreliability of these single-threshold events causes much difficulty.

What is needed is continual feedback about the state of the system, in a normal natural way, much in the manner that human participants in a joint problem-solving activity will discuss the issues among themselves. This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem. We do not yet know how to do this with automatic devices: current attempts tend to irritate as much as they inform, either failing to present enough information or presenting so much that it becomes an irritant: a nagging, 'back-seat driver', second-guessing all actions.
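
to make the contrast concrete, a sketch with invented numbers -- nothing from Norman's paper: the single-threshold alarm is silent right up until it screams, while a graded channel keeps the operator loosely in the loop the whole time.

code:

def threshold_alarm(value: float, limit: float = 100.0) -> str:
    # classic single-threshold instrument: silence carries no information
    return "BUZZER" if value > limit else ""

def graded_feedback(value: float, limit: float = 100.0) -> str:
    # continual, non-intrusive status instead of one cliff-edge alarm
    fraction = value / limit
    if fraction < 0.5:
        return "nominal"
    if fraction < 0.9:
        return "drifting high"
    if fraction <= 1.0:
        return "approaching limit"
    return "over limit"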

And so we get closer to issues of AI: to modulate the way such a system interacts with people in a joint activity, the automation needs to understand the meaningfulness of changes according to ongoing goals and objectives. It has to be aware of its own abilities and limits. And we do not know how to do that. Don Norman concludes that the problem isn't that automation is too powerful, but that it isn't powerful enough. I would personally like to augment this with Klein/Woods (the joint activity paper I linked above), which rather re-frames the idea: automation shouldn't aim to be more independent, but to be a better teammate.

A final word of warning from Don Norman:

quote:

Today, in the absence of perfect automation an appropriate design should assume the existence of error, it should continually provide feedback, it should continually interact with operators in an appropriate manner, and it should have a design appropriate for the worst of situations. What is needed is a soft, compliant technology, not a rigid, formal one.

MononcQc
May 29, 2007

posting this because I used it at work today:

[attached image not preserved]

tk
Dec 10, 2003

Nap Ghost

MononcQc posted:

posting this because I used it at work today:

[attached image not preserved]

Did you reinvent The Ribbon?

MononcQc
May 29, 2007

mostly it was the idea that dashboard metrics have to be chosen carefully because during an outage people have less bandwidth, not more, so you have to pick a restricted set of values that are likely to be generally useful as vitals (interpreted based on current context), rather than trying to add tons of metrics that each bring their own context to an ongoing incident.

Someone during the discussion did mention something like the ribbon, but the difference is that the ribbon idea is that "everyone is a slightly different power user," whereas this study is about "everyone using the system under high pressure restricted their use of the device's capabilities and stuck with familiar paths to require less mental bandwidth."

Both result in users vastly under-using the system's capabilities, but for very different reasons.
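
one way to make that concrete, as a sketch with invented metric names rather than anything from the study: cap the incident view at a handful of vitals and refuse to grow past it.

code:

MAX_VITALS = 6   # assumed budget: what a person can track under pressure

def incident_dashboard(metrics: dict[str, str]) -> dict[str, str]:
    """Accept a metric -> unit mapping only if it fits the vitals budget."""
    if len(metrics) > MAX_VITALS:
        raise ValueError(f"{len(metrics)} metrics is too many for an incident "
                         f"view; pick at most {MAX_VITALS} vitals")
    return metrics

# incident_dashboard({"error_rate": "%", "p99_latency": "ms", "saturation": "%"})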

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

MononcQc posted:

mostly it was the idea that dashboard metrics have to be chosen carefully because during an outage people have less bandwidth, not more, so you have to pick a restricted set of values that are likely to be generally useful as vitals (interpreted based on current context), rather than trying to add tons of metrics that each bring their own context to an ongoing incident.

Someone during the discussion did mention something like the ribbon, but the difference is that the ribbon idea is that "everyone is a slightly different power user," whereas this study is about "everyone using the system under high pressure restricted their use of the device's capabilities and stuck with familiar paths to require less mental bandwidth."

Both result in users vastly under-using the system's capabilities, but for very different reasons.

this is a great argument for producing simpler, less capable systems.

MononcQc
May 29, 2007

somewhat, yeah. The idea is really that the tech has to get out of the way. Everyone buys and ships checklists of features and people adjust to them haphazardly, but real high-performance poo poo used in the heat of the moment needs to be useful without getting your full, undivided attention. It should be there to support you and augment you, not for you to keep configuring it.

One paper I love probably states it best through its title: "I want to treat the patient, not the alarm" (it's Dr. Karen Raymer's thesis, with the less entertaining subtitle: User image mismatch in Anesthesia alarm design).

MononcQc fucked around with this message at 04:13 on Mar 1, 2022


rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

MononcQc posted:

One paper I love probably states it best through its title: "I want to treat the patient, not the alarm"

it's the same for any toolmaker. Fundamentally you need to realize that the tools you make are not center stage; the things people are using the tools to make are. Toolmakers who haven't internalized this deep wisdom consistently make bad tools.
