|
Shame Boy posted:y'all just reminded of some stupid new feature design we got handed at work a while ago. basically we need to let the user select how many of something they want, using a touch screen. the max is 10, and an overwhelming majority of people will only ever want 1 and rarely 2 of this thing. so i'm thinking something like, number box with up and down arrows, with 1 already set, right? we get the designs (that have already been approved by the customer, of course) and it's a loving pin pad, with logic so if you enter any number all the other numbers are disabled (since you can only enter the single digits of 2-9), except if you enter 1 then all the numbers except zero are disabled, so you can enter 10. this is of course in a separate modal that pops up instead of just like, on the page you're already on omg this drives me mad lol
|
# ? Feb 16, 2022 02:57 |
|
should have been a dial that goes all the way to 10 even if there's an 11 on there
|
# ? Feb 16, 2022 03:02 |
|
Shame Boy posted:y'all just reminded of some stupid new feature design we got handed at work a while ago. basically we need to let the user select how many of something they want, using a touch screen. the max is 10, and an overwhelming majority of people will only ever want 1 and rarely 2 of this thing. so i'm thinking something like, number box with up and down arrows, with 1 already set, right? we get the designs (that have already been approved by the customer, of course) and it's a loving pin pad, with logic so if you enter any number all the other numbers are disabled (since you can only enter the single digits of 2-9), except if you enter 1 then all the numbers except zero are disabled, so you can enter 10. this is of course in a separate modal that pops up instead of just like, on the page you're already on lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons. i'm going to chalk that one up to an extreme case of engineer brain. it just has that feel. "what's a way to describe this in a mathematical function" rather than "what's the human experience of this"
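since we're on the topic: the pin pad's disable logic is genuinely only a few lines, which makes the whole modal even funnier. a hypothetical sketch (function name and everything else made up, obviously not the actual product code):

```python
def allowed_next_digits(entered: str) -> set[str]:
    """Pin-pad quantity entry with a max of 10: which keys stay
    enabled given what has already been typed. Hypothetical
    reconstruction of the design described above."""
    if entered == "":
        return set("123456789")  # a leading zero makes no sense
    if entered == "1":
        return {"0"}             # only "10" can continue past a 1
    return set()                 # 2-9 are terminal; everything disables
```

the radio-button alternative needs no logic at all: ten buttons labeled "1" through "10", exactly one selected.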
|
# ? Feb 16, 2022 03:08 |
|
Sagebrush posted:lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons. i think it started as "this is a number entry field so obviously we should just put the same thing we use for other number entry fields so i don't have to think about this" (we use a pinpad thing sometimes for entering longer numbers) and then "but wait, we'd have to add logic to make it make sense with the range of 1 to 10..." and so on
|
# ? Feb 16, 2022 03:14 |
|
Expo70 posted:i really wish i could just do this kind of research fulltime and just graph and find all the optimal control solutions to given problemtypes. Even fighter cockpits have awful solves for a lot of these problems. if i may hawk my own particular nerd interests for a little bit: i think you'd be interested in the ergonomics of the saab viggen, op. a long time ago i translated and posted a seminar transcription in which a bunch of old engineers went over the interface design (and a lot of tangential anecdotes about software engineering in 1970's sweden). starts here (old goldmined coldwar thread). i'd say it's relevant both to this thread and to yospos in general, there's quite a bit of interesting computing history in there as well as a bunch of discussion about project management and so on. it doesn't go into details about what the cockpit was actually like to work in, but the earlier aj37 variant is pretty faithfully modeled in dcs and has a flight manual in english if you really want to get into it. not sure if i'd recommend that though, there are a lot of very subtle weirdnesses about it that aren't apparent at a glance and just reading the flight manual probably isn't a good way to discover them. like for example most of its weapons require you to input the target's barometric altitude (QFE) and there's a lot of interesting reasons for why, it has a rocket sight that seems like it's ccip at a glance but in reality absolutely isn't because of reasons, and so on. they tried really hard to make things easy for the pilot, as far as the 1960's computer technology would allow.
|
# ? Feb 16, 2022 06:54 |
|
that seems like posting gold for someone who knows what the hell youre talking about anyway ive been thinking about the touch screen interfaces at supermarkets. if a dollar of research toward improving user experience was to be spent, a significant portion should go toward this. something that im guessing hundreds of millions of people use every day. a second saved in that machine is a year or decade of productive time gone to waste a) it should identify you because it read your user card or phone or whatever (not visual recognition) and prepare an interface thats appropriate to you b) if the interface assumed for you is the incorrect one it should be obvious and easy to pull back to the generic interface that suits anyone equally. thats all ive got. make it good. save the world
|
# ? Feb 16, 2022 09:12 |
|
doing computer stuff at 2.2% profit margin businesses is basically a brutal exercise in kakistocracy
|
# ? Feb 16, 2022 10:43 |
|
i came across this linked from an NTSB report if anyone's interested, it's basically a big list of "how to make a cockpit correctly" that aggregates a bunch of different rules from the FAA and DoD and data from studies. most of it is just big lists of rules to follow but it's got little summary sections after each big list that describes the reasons why you should do things that way along with examples: https://rosap.ntl.bts.gov/view/dot/12411 i love how fuckin' detailed it gets, like there's an entire massive section just about fonts. and tbh the font section's actually pretty useful in general, it's got a ton of info on when to use what font, exactly what ratio of height and width to use, how thick the characters should be etc. gonna have to reference it next time i'm working on a thing with a screen.
|
# ? Feb 16, 2022 15:09 |
|
Sagebrush posted:lomarf. it would take up exactly the same amount of real estate while being significantly less confusing if they just had buttons labeled "1" through "10" and selecting one popped out all the others, like radio buttons. Yeah but what if next year we want to have 11 buttons! There wouldn’t be space for that! This design is the future and we must implement it now.
|
# ? Feb 16, 2022 15:32 |
|
echinopsis posted:that seems like posting gold for someone who knows what the hell youre talking about Not all time is productive time. And in general the population's leisure time is nonproductive. Even if it were productive, it may not be productive for the agent in control. Besides, the obvious improvement is to use CV to help identify, 90% of the time, what vegetables you just placed on the scale. Just surface the top 2 guesses and an "other" button that falls back to the current search.
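the top-2-plus-fallback idea sketched out, assuming a classifier that hands back label scores (the function name, threshold, and "Other" label are all invented for illustration, not any real kiosk API):

```python
def produce_prompt(cv_scores: dict[str, float],
                   threshold: float = 0.15) -> list[str]:
    """Self-checkout produce screen: show the classifier's top two
    guesses plus an 'Other' escape hatch back to the full search.
    Low-confidence guesses are dropped rather than shown."""
    ranked = sorted(cv_scores.items(), key=lambda kv: -kv[1])
    guesses = [name for name, score in ranked[:2] if score >= threshold]
    return guesses + ["Other (search all items)"]
```

the escape hatch is the important part: when the assumed interface is wrong, falling back to the generic one has to be one obvious tap.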
|
# ? Feb 16, 2022 15:59 |
|
I've name-dropped resilience engineering a few times in this thread, and it's one of the least well-defined / most overloaded terms. Here is an interesting little bit from a novel I read last summer, which had a quick note about the term “resilience.” I’m translating loosely from French:

quote:A term borrowed from metallurgy, appropriated by pop-science psychiatrists and countless mediocre motivational speakers, resilience is one of the most overloaded words of this era. Synonymous for the common man with a capacity to overcome obstacles and to grow despite adversity, resilience rather points to the quality of materials that can return to their original form after having been hammered, burnt, twisted, or put under some tension.

So how does resilience engineering define resilience? Well that's this week's paper, once again by David Woods, titled Four concepts for resilience and the implications for the future of resilience engineering. The paper opens by admitting that the popularity of the term has led to confusion regarding what it means in the first place. I recall seeing other papers that held the ill-defined term as one of the biggest weaknesses of a discipline named after it. Woods categorizes all the different uses seen around the place into 4 groups: rebound, robustness, graceful extensibility, and sustained adaptability.

Rebound

Why do some communities, groups, or individuals recover from traumatic disrupting events or repeated stressors better than others and resume previous normal functioning? Most research there asserts that the difference comes from which resources and capabilities were present before the disruptions, not from what happens when surprised. The paper quotes:

quote:“the ability to deal with a crisis situation is largely dependent on the structures that have been developed before chaos arrives. The event can in some ways be considered a brutal and abrupt audit: at a moment's notice, everything that was left unprepared becomes a complex problem, and every weakness comes rushing to the forefront”

A second important aspect is that research focusing on rebound cares a lot more about the fact that disruptions are surprises than about each individual disruption's characteristics. The surprise challenges a model and forces revisions into the system. This creates a weird effect where this structure of research drives towards studying another definition of resilience (graceful extensibility): to deal with disruption, the capability to adapt has to already be there, so it treats resilience as a potential. But you can only measure the potential by validating it across disruptions, which this definition doesn't like focusing on. In short, a lot of questions about resilience are about why or how organizations rebound, but the research has mostly moved on to study systems where there is an ongoing and continual ability to adapt and adjust.

Robustness

This is generally perceived to be a conflation of resilience with another term -- the ability to absorb disruptions -- robustness. More robustness means your system can tolerate a broader range of disturbances while still responding effectively. Generally, robust control works, and only works, for cases where the disturbances are well-modelled. So this definition remains sensitive to the question of what happens when the disturbance is outside the scope of what was modelled. The typical failure mode here is one where the system reaches its limits and suddenly collapses. Woods states that brittleness tends to live at the boundaries of robustness. Cybersecurity is an interesting domain here: you can be extremely robust to specific types of threats, but once the attack is novel, using a different approach, everything goes bad.

The naive understanding of robustness is that you can continuously expand the envelope of stressors you can cope with. In practice, empirical research has shown that it is in fact more often a tradeoff: the things you can handle mean there are other things to which you become more fragile. This, once more, pushes towards the two latter definitions, which focus more on ways to adapt than ways to predict, because that tradeoff is more and more considered to be fundamental and unavoidable (think, for example, of heuristics and limits to attention).

Graceful Extensibility

Graceful extensibility is a sort of play on the idea of graceful degradation. Rather than asking how or why people, systems, and organizations bounce back, this line of approach asks: how do systems stretch to handle surprises? Resources are finite, environments are changing, and their boundaries shift in ways that require stretching and elasticity. A tenet here is that without the ability to stretch and adjust, your brittleness is far more severe than expected during normal operations, and generally exposed through extremely rapid collapses. So a big question is: where's the boundary? We never know; incidents define it. There's a rate and tempo to events that lets us get a glimpse of what it might be, so they can be looked at, tracked, and exercised. A common challenging scenario is how an organism that deals with "normal" challenges deals with two of them happening at once, for example, because this risks overextending the system. The idea here is influenced by Safety-I (studying and preventing failures) vs. Safety-II (studying and enhancing successes), such that graceful extensibility can be seen as a positive attribute: how do we create a readiness-to-respond that is a strength and can be leveraged in all sorts of situations, rather than narrowing it to the avoidance of negative effects?

Contrasted with rebound, the approach here is to look at past challenges and see them as a way to gauge the potential to adapt to new surprises in the future. It also allows studying sequences and series of rebounds over a longer-term view of the system. How do they succeed and how do they fail? The idea is that they tend to fail when exhausting their capacity to mobilize a response as disturbances grow and cascade, something dubbed decompensation. This tends to be detected when the ability to recover from a crisis takes longer and longer, which gives the impending sense of a tipping point or collapse. The positive version of it is the anticipation of bottlenecks and crunches, and being able to deal with them. There are things that can be done to aid this resilience potential, but it contains its own challenges, where an organization can hinder its own capacity while trying to improve it. This leads to the fourth definition...

Sustained Adaptability

This refers to the ability to manage/regulate the adaptive capacities of systems. In short, while the past can be used to calibrate the potential for future resilience, the past is also not predictive and you can hit walls where the capacity is gone. Resilience-as-sustained-adaptability asks 3 questions:

An agenda of this type of resilience is in managing capacities dedicated to resilience. In this perspective, it makes sense to say a system is resilient, or not, based on how well it balances all the tradeoffs, or not.

-----

Woods states that the yield from the first two types of resilience has been low. The latter two approaches, the most positive ones, tend to provide better lines of inquiry, though the discipline is still young.
|
# ? Feb 20, 2022 23:53 |
|
Since my last post was large and less about ergonomics, you also get a reference to a really cool short paper, The Problem with Checklists, which looks at the transfer of checklists from the airline industry to the healthcare industry (reported as a huge success in The Checklist Manifesto) and analyzes how their adoption failed to show actual improvements in western healthcare, often due to their design. It's short enough (and with pictures!) that it doesn't need cliff notes.
|
# ? Feb 21, 2022 14:03 |
|
If a plane crashes, a bunch of congresspeeps ream the airline c-levels. This, not checklists, is the basic feedback loop that means that air flight is the safest non-elevator powered transportation.
|
# ? Feb 21, 2022 16:48 |
|
Why is the faa unequivocally one of the only functional civilian govt agencies? The faa head shares the dressing-down when a plane crashes too
|
# ? Feb 21, 2022 16:54 |
|
valley brain callin in with the make the entire plane out of black box takes
|
# ? Feb 21, 2022 17:16 |
|
bob dobbs is dead posted:Why is the faa unequivocably one of the only functional civilian govt agencies? The faa head shares the dressing-down when a plane crashes too in general it is a bit of a mystery how the us of all places managed to create a bunch of really good regulatory bodies (since destroyed to some extent). the fda is also up there with the faa in doing a ton of the gruntwork on drug safety for the whole world. and i say this now living back in europe and convinced that europe is better at most government things than the us.
|
# ? Feb 21, 2022 17:22 |
Cybernetic Vermin posted:in general it is a bit of a mystery how the us of all places managed to create a bunch of really good regulatory bodies (since destroyed to some extent). the fda is also up there with the faa in doing a ton of the gruntwork on drug safety for the whole world. hell, ive never lived in the states proper and i think that we don’t have anything here to compete with the fda (the f part specifically), faa, nhtsa, and that one smaller industrial safety thing which is not osha but really cool. in general, americans suck at making a cohesive government, but are quite good at making boutique agencies
|
|
# ? Feb 21, 2022 17:39 |
|
MononcQc posted:Since my last post was large and less about ergonomics, you also get a reference to a really cool short paper, The Problem with Checklists, which analyzes the transfer of their use from the airline industry to the healthcare industry (which was reported as a huge success in The Checklist Manifesto) and instead analyzes the way their adoption failed to prove actual improvements in western healthcare, often due to their design. This is a good article. ~As a pilot~ I was like "But checklists are great! Why wouldn't they work in healthcare?" and then I got to this part: quote:For an Airbus A319 (figure 1), a single laminated gatefold (four sides of normal A4 paper) contains the 13 checklists for normal and emergency operations. Tasks range from 2 (for cabin fire checklist) to 17 (for before take-off checklist), with an average of seven per checklist. Each task is described in no more than three words and can be checked immediately, with usually a single word of confirmation. It has no check boxes, does not require signature and is designed to be used by one person, with specific checklists performed aloud. Lmaoing @ "empower staff" and requiring a signature on the checklist to proceed. Talk about cargo cult science
|
# ? Feb 21, 2022 17:49 |
|
bob dobbs is dead posted:If a plane crashes, a bunch of congresspeeps ream the airline c-levels. This, not checklists, is the basic feedback loop that means that air flight is the safest non-elevator powered transportation. you shame the thread with this awful take. i tried looking for history papers on the origins and development of aviation safety culture as it exists today but didn't really find anything obvious, someone please send help
|
# ? Feb 21, 2022 18:49 |
|
The ntsb's precursor was founded in response to a notre dame football coach dying in a 1931 fokker f-10 crash, which led to national outcry, open demands from the president and congress, and mass firings and reorganizations of government bureaus. importantly, the purges also banned hiding details of aircraft investigations. the serious political power arrayed in support of airline safety cannot be discounted. that's not a 'take'. compare to the 1951 crash that killed basically the entire soviet military ice hockey team, which got no significant organizational response from the ussr: thats a material part of why aeroflot had more than half the world's aviation fatalities until the 90s bob dobbs is dead fucked around with this message at 19:14 on Feb 21, 2022 |
# ? Feb 21, 2022 19:02 |
|
TheFluff posted:you shame the thread with this awful take imo it seems like mostly a tradition handed down from pilot to pilot rather than anything done by the faa. it probably starts with pilots trained by the us military and spreads out from there. additionally for general aviation theres a high cost of entry which is gonna filter out people doing it on a whim who might not take safety as seriously.
|
# ? Feb 21, 2022 19:03 |
|
bob dobbs is dead posted:The ntsb's precursor was founded in response to a notre dame football coach dying in a 1931 fokker f-10 crash which led to national outcry and open demands from the president and congress and mass firings and reorganizations of government bureaus you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works Shaggar posted:imo it seems like mostly a tradition handed down from pilot to pilot rather than anything done by the faa. it probably starts with pilots trained by the us military and spreads out from there. additionally for general aviation theres a high cost of entry which is gonna filter out people doing it on a whim who might not take safety as seriously. the us military had an absolutely godawful safety culture until relatively recently so this doesn't ring true to me at all
|
# ? Feb 21, 2022 19:49 |
|
its an entire century of congress yelling at people, no joke
|
# ? Feb 21, 2022 19:50 |
|
TheFluff posted:you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works bud holland air show practice safety culture
|
# ? Feb 21, 2022 19:54 |
|
bob dobbs is dead posted:its an entire century of congress yelling at people, no joke

TheFluff posted:you absolutely cannot attribute the better part of a century of institution building to a single event, that isn't how history works

it's half and half. bob's thesis that aviation safety only advances through congressmen yelling at CEOs is wrong, but he is partially correct in that basically every rule is written in blood. for instance, the FAA was created specifically in response to this incident: https://en.wikipedia.org/wiki/1956_Grand_Canyon_mid-air_collision

but that doesn't mean every development in aviation safety came from a congressional inquiry. a shitload of them came from WW2 experiences, for instance. e.g. making controls have differently shaped levers so that they are harder to confuse by touch, or orienting the six basic gauges in the same configuration in every plane, or using standardized colors on the airspeed indicator. all can be traced to some fuckup in ww2 where people died.

many more came from pure research into human factors conducted by NASA and other institutions. many of the passive aerodynamic safety characteristics designed into modern planes came from NASA. to this day they maintain a really good aviation safety reporting program that collects more than 100,000 incident reports a year and uses them to issue new safety directives.

some advances came totally at random through personal experiences. jet fuel nozzles are oblong and don't fit round avgas filler necks; this feature was independently invented by a pilot whose plane was misfueled at an air show, leading to engine failure shortly after takeoff. he survived and came up with a solution.

congress also pulls some really dumb poo poo with aviation safety. back in the day, in order to fly in the airlines, you needed 250 hours to be a first officer (i.e. copilot), and 1500 hours to be a captain. in 2009 a small airliner crashed and 50 people died.
i will just quote wikipedia on what happened: quote:Following the clearance for final approach, landing gear and flaps (5°) were extended. The flight data recorder (FDR) indicated the airspeed had slowed to 145 knots (269 km/h; 167 mph).[3] The captain then called for the flaps to be increased to 15°. The airspeed continued to slow to 135 knots (250 km/h; 155 mph). Six seconds later, the aircraft's stick shaker activated, warning of an impending stall, as the speed continued to slow to 131 knots (243 km/h; 151 mph). The captain responded by abruptly pulling back on the control column, followed by increasing thrust to 75% power, instead of lowering the nose and applying full power, which was the proper stall-recovery technique. That improper action pitched the nose up even further, increasing both the g-load and the stall speed. The stick pusher activated (The Q400 stick pusher applies an airplane-nose-down control column input to decrease the wing's angle of attack (AOA) after an aerodynamic stall),[3] but the captain overrode the stick pusher and continued pulling back on the control column. The first officer retracted the flaps without consulting the captain, making recovery even more difficult.[24]

stall recovery is taught in like lesson 2 of your private pilot certificate. doing what the captain and first officer both did here would be an instant "examiner grabs the controls out of your hands" fail of your checkride and probably would lead to an investigation questioning your instructor's judgment in sending you to the test. the captain had 3379 flight hours and the first officer had 2244. congress responded to the crash by changing the law so that you now need 1500 hours to be an airline first officer. Sagebrush fucked around with this message at 06:17 on Feb 22, 2022 |
# ? Feb 22, 2022 06:12 |
|
Shame Boy posted:i came across this linked from an NTSB report if anyone's interested, it's basically a big list of "how to make a cockpit correctly" that aggregates a bunch of different rules from the FAA and DoD and data from studies. most of it is just big lists of rules to follow but it's got little summary sections after each big list that describes the reasons why you should do things that way along with examples: I was drafting a work blog post today that complains about bad dashboard design and mentioned things discussed in this thread already (eg. there is more signal than attention available, and I guarantee you that most heavy dashboards have lots of metrics willfully ignored or that nobody knows what they mean) and a coworker commented "like when an airplane crashes because the pilot can't hear some alarm" and I was like "oh no, planes actually have well-designed dashboards" and could instantly refer to that document. so uh yeah thanks for the cool reference.
|
# ? Feb 23, 2022 02:37 |
|
Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it. How can the system be broken if there's no way to tell? Checkmate.
|
# ? Feb 23, 2022 02:40 |
|
Related: https://www.youtube.com/watch?v=2hMn7ZweF6s That insistent beeping is the alarm that goes off when your manifold pressure is lowered to a descent idle (implying that you're about to land) but the gear is still up. Clearly both of the pilots were just completely habituated to it.
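the gear horn logic itself is trivial, which is kind of the point: the failure here is habituation, not detection. a sketch with a made-up threshold value (not taken from any real type certificate):

```python
def gear_horn(manifold_pressure_inhg: float, gear_down: bool,
              descent_idle_inhg: float = 12.0) -> bool:
    """Classic gear-up warning: sound the horn when power is pulled
    back to a descent setting but the landing gear is still up.
    The 12.0 inHg threshold is illustrative only."""
    return manifold_pressure_inhg <= descent_idle_inhg and not gear_down
```

the alarm in the video fires exactly as designed; the pilots had simply stopped hearing it.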
|
# ? Feb 23, 2022 02:44 |
|
Sagebrush posted:Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it. i recently read one of the preliminary NTSB reports for that whole thing and they did apparently get several (more general) instrument disagree alarms, so i'm not entirely sure that would have helped much more but idk
|
# ? Feb 23, 2022 02:49 |
|
alarms should hurt more clearly
|
# ? Feb 23, 2022 03:07 |
|
Sagebrush posted:Tbf to him, one of the many, many factors that led to the 737 MAX crashes was that the alarm system to alert the pilots to a mismatch in angle of attack sensor data was an optional feature. It's not alarm fatigue, but I bet he heard that about the 737s and that's where he was going with it. A more specific situation for cockpit automation afaict is something called mode error. The book Behind Human Error introduces it through one of the most fascinating incident reports, the Grounding of the Royal Majesty, where a ship had a disconnected GPS antenna and was (unbeknownst to the crew) fully in dead reckoning mode, and ended up grounding itself after days of being slightly off course. This is the sort of thing that I believe also describes the 737 MAX better. Here's the Behind Human Error description: and another, shorter example than the Royal Majesty: Once again, that book is my bible around incidents
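a toy model of that failure mode: the position source silently degrades and the display keeps updating as if nothing happened. entirely hypothetical, not how any real integrated bridge system is built:

```python
class NavSystem:
    """Toy model of the Royal Majesty's mode error: the fix source
    silently falls back from GPS to dead reckoning, with no loud
    mode annunciation, so the crew's model of which mode they are
    in quietly diverges from reality."""
    def __init__(self) -> None:
        self.mode = "GPS"

    def position(self, gps_fix, dr_estimate):
        if gps_fix is None:        # e.g. antenna disconnected
            self.mode = "DR"       # the silent transition is the problem
            return dr_estimate     # looks just like a GPS fix on screen
        self.mode = "GPS"
        return gps_fix
```

the fix isn't removing the fallback (dead reckoning is a sensible degraded mode); it's making the mode transition impossible to miss.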
|
# ? Feb 23, 2022 03:08 |
|
lol at OPEN DESCENT having multiple manual and automatic modes of engaging big tesla energy
|
# ? Feb 23, 2022 11:58 |
|
Yeah, Tesla, as a tech-industry-derived type of approach, does very clumsy automation that is rife with mode errors, but also all the problems pointed out by Lisanne Bainbridge in 1983 in The Ironies of Automation. The sort of internal motto they have about "each human input is an error" also lines up with the bad attitude that was pointed out in the papers I reviewed here 2 and 3 weeks ago, where they try to focus the automation on being more and more independent rather than making it cooperative, which is necessarily an unproductive approach. It's the standard sci-fi dream, but decades of cybernetics and human factors research have found time and time again that you run into goal misalignment and an inability to deal with something called the context gap (which I should cover at some point, but essentially means: the automation does not know when its model of the world is incorrect and necessarily needs a human to do it for it, because we -- often but not always -- have the capacity to do so), which in turn leads to incidents.
|
# ? Feb 23, 2022 15:05 |
|
New one I had never read before: The 'Problem' with Automation: Inappropriate Feedback and Interaction, not 'Over-Automation' by Don Norman (of The Design of Everyday Things). To sort of counteract my takes while still aligning with the previous papers, this one (from 1990) makes the argument that the problem wasn't that there was too much automation, but that it should be either less powerful or a lot more powerful. That's mostly because he believes the problem is one of feedback. A caveat here is that my post on Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity is about a paper that's 14 years newer, and therefore more comprehensive. So I would keep in mind when reading this Don Norman one that it specifically tackles a subset of the dynamic -- device-to-person communication -- but ignores a significant part of the rest of the sociotechnical dynamics. anyway, let's get going with Don Norman: quote:the problem is not the presence of automation, but rather its inappropriate design. The problem is that the operations under normal operating conditions are performed appropriately, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. When the situations exceed the capabilities of the automatic equipment, then the inadequate feedback leads to difficulties for the human controllers. One of the basic problems around feedback is that it used to be very implicit when you were co-located with the room where all the work was done. But automation tends to create distance between the operator and the system. Rather than being around, feeling things, you read numbers and get a large level of indirection. Don Norman says that this mental isolation is one of the main sources of problems. The argument is made by comparing 3 incidents in airlines:
The general idea is that the automation was not the problem; the differentiating factor was the level of feedback: quote:In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other. But in the first situation, if problems occur, the autopilot will compensate and the crew will notice only by chance (as in the case study of the fuel leak). When automatic devices compensate for problems silently and efficiently, the crew is 'out of the loop', so that when failure of the compensatory equipment finally occurs, they are not in any position to respond immediately and appropriately. So what are the solutions proposed? Generally, feedback is missing more than it is too present (at the time); it's also essential to learn and know whether commands sent were effective. So the idea is more feedback. But there's another problem: we don't know how to actually give that feedback properly. Don Norman specifies that in all cases above, information was available, just not processed properly. The feedback was present but not getting the attention required. And usually the only way we know of grabbing attention is more alarms: quote:The task of presenting feedback in an appropriate way is not easy to do. Indeed, we do not yet know how to do it. We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems. The proliferation of these alarms and the general unreliability of these single-threshold events causes much difficulty.
And so we get closer to issues of AI: to modulate the way such a system interacts with people in a joint activity, the automation needs to understand the meaningfulness of changes according to ongoing goals and objectives. It has to be aware of its own abilities and limits. And we do not know how to do that.

Don Norman concludes that the problem isn't that automation is too powerful, but that it isn't powerful enough. I would personally augment this with Klein/Woods (the joint activity paper I linked above), who re-frame it: the goal isn't automation that is more independent, but automation that is a better teammate.

A final word of warning from Don Norman:

quote:Today, in the absence of perfect automation an appropriate design should assume the existence of error, it should continually provide feedback, it should continually interact with operators in an appropriate manner, and it should have a design appropriate for the worst of situations. What is needed is a soft, compliant technology, not a rigid, formal one.
|
# ? Feb 26, 2022 19:04 |
|
posting this because I used it at work today
|
# ? Mar 1, 2022 00:32 |
|
MononcQc posted:posting this because I used it at work today Did you reinvent The Ribbon?
|
# ? Mar 1, 2022 00:41 |
|
mostly it was the idea that dashboard metrics have to be chosen carefully because during an outage people have less bandwidth, not more. So you have to pick a restricted set of values that are likely to be generally useful as vitals (which get interpreted based on current context), rather than trying to add tons of metrics that each provide their own context to an ongoing incident.

Someone during the discussion did mention something like the ribbon, but the difference is that the ribbon idea is "everyone is a slightly different power user", whereas this study is about "everyone using the system under high pressure restricts their use of the device's capabilities and sticks with familiar paths to require less mental bandwidth". Both result in users vastly under-using the system's capabilities, but for very different reasons.
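The vitals idea can be sketched in a few lines. This is a hypothetical illustration (the metric names, values, and the vitals/drill-down split are all invented, not from the study): the incident view defaults to a small fixed set chosen ahead of time, and everything else stays opt-in.

```python
# Invented example metrics for illustration only.
ALL_METRICS = {
    "error_rate": 0.02, "p99_latency_ms": 840, "queue_depth": 12,
    "gc_pause_ms": 3, "cache_hit_ratio": 0.97, "fd_count": 512,
}

# Small, stable set chosen ahead of time; these get interpreted in the
# context of the incident rather than each metric explaining itself.
VITALS = ["error_rate", "p99_latency_ms", "queue_depth"]

def incident_view(metrics, vitals=VITALS):
    """Default view under pressure: only the vitals."""
    return {name: metrics[name] for name in vitals}

def drill_down(metrics, extra):
    """Everything else is opt-in, asked for explicitly."""
    return {name: metrics[name] for name in extra}

print(incident_view(ALL_METRICS))
# {'error_rate': 0.02, 'p99_latency_ms': 840, 'queue_depth': 12}
```

The design choice is in the default, not the capability: nothing is removed from the system, it's just kept off the path people reach for when their bandwidth is lowest.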
|
# ? Mar 1, 2022 01:01 |
|
MononcQc posted:mostly it was the idea that dashboard metrics have to be chosen carefully because during an outage people have less bandwidth, not more, so you have to pick a restricted set of values that are likely to generally be useful to provide vitals (which are interpreted based on current context) rather than trying to add tons of metrics that provide their own context to an ongoing incident. this is a great argument for producing simpler, less capable systems.
|
# ? Mar 1, 2022 01:19 |
|
somewhat, yeah. The idea is really that the tech has to get out of the way. Everyone buys and ships checklists of features and people adjust to them haphazardly, but real high-performance poo poo used in the heat of the moment needs the ability to be useful without getting your full undivided attention. It should be there to support and augment you, not for you to keep configuring it.

One paper I love probably states it best through its title: "I want to treat the patient, not the alarm" (it's Dr. Karen Raymer's thesis, with the less entertaining subtitle: User image mismatch in Anesthesia alarm design). MononcQc fucked around with this message at 04:13 on Mar 1, 2022 |
# ? Mar 1, 2022 03:58 |
|
MononcQc posted:One paper I love probably states it best through its title: "I want to treat the patient, not the alarm" it's the same for any toolmaker. Fundamentally you need to realize that the tools you make are not center stage; the things people use the tools to make are. Toolmakers who haven't internalized this deep wisdom consistently make bad tools.
|
# ? Mar 1, 2022 05:46 |