Designing For Sensemaking: A Macrocognitive Approach

Robert Hutton, Gary Klein, PhD, & Sterling Wiggins
Klein Associates Division, Applied Research Associates
Contact: [email protected]; 937.873.8166

Overview

Sensemaking has typically been investigated in the context of organizations (Weick, 1995) and in post hoc explanations of accidents (Dekker & Lutzhoft, 2004). The approach represented here has been to understand the process of sensemaking in real-time decision making environments, for individuals and teams (Klein et al., 2006a; 2006b; Sieck et al., 2007). Recent work has explored a conceptual model of sensemaking, together with a methodology for understanding sensemaking in a work domain and leveraging that understanding to improve the design of supporting technologies (Klein et al., 2004; Dominguez et al., 2006). We are also currently applying this approach to understanding sensemaking in three different design contexts: a set of instructional workshops with systems designers and technologists, a project collaborating with military human factors analysts (both with the Singaporean Ministry of Defence), and work with a submarine command center team.

Our approach to leveraging an understanding of sensemaking has been a four-pronged effort. First, we have explored the nature of sensemaking as the process by which individuals and teams recover their understanding of a situation following a surprise, violated expectancy, or growing suspicion that the situation is not as it was initially understood to be (the theory piece). Second, we have developed a customized cognitive task analysis approach that focuses on identifying sensemaking challenges and sensemaking requirements for system support (the methodology piece). Third, we are in the process of integrating this approach into a Cognitive Systems Engineering design approach called "Decision-Centered Design" (the design piece). Finally, we are applying this understanding and this methodology to improving design through a series of workshops and through collaboration on a project to develop improved displays for mobile battle command (the application piece). This paper briefly describes each effort.

Theory: The Data-Frame Model of Sensemaking

Figure 1. The Data-Frame Model of Sensemaking (Klein et al., 2006b)

The Data-Frame Model of Sensemaking (Figure 1) has been proposed to fill a gap in our understanding of expert cognition. Experts make decisions primarily through an understanding of the current situation, how it arose, and where it is going (Klein, 1993). The term situation(al) awareness has been widely used to capture the nature of this understanding (Endsley, 2000). However, Endsley's description of situation awareness refers primarily to a model of the current situation held in working memory (a product of assessment). How that model is developed, how it is maintained in the presence of conflicting and uncertain information, and how it is recovered following a surprise or the growing suspicion that the current understanding is incorrect (the processes of situation assessment and sensemaking) are not explicated to any great extent in the current literature and research on situation awareness. The conceptual model of sensemaking was intended to fill this gap (Klein et al., 2006a; 2006b). The model was developed from several studies exploring situation assessment, situation awareness, and sensemaking, using existing cognitive task analysis methods: primarily observations of performance in real and training environments, interviews, and incident accounts from past projects (Sieck et al., 2007).

Sensemaking is defined as the deliberate effort to understand events, and is typically triggered by unexpected changes or surprises that make decision makers doubt their prior understanding. Sensemaking is the active process of building, refining, questioning, and recovering situation awareness. The Data-Frame model describes how people generate an initial account to explain events, how they elaborate on that explanation in the face of new information, how they question the explanation when some of the new data do not fit the story, how they sometimes fixate on a current explanation, how they discover the inadequacies of that account, and finally how they reframe their account based on competing accounts or newly generated explanations. These are all active processes that must be supported in technologies and training in order to develop a current, accurate, and useful understanding of the situation. The Data-Frame model describes how these are achieved for the purposes of detecting problems, making new discoveries, generating expectancies about how the situation might evolve, and identifying leverage points for problem solving and decision making.

The crux of the model is the reciprocity of the data and the explanation, or frame. A frame is an explanatory structure that defines entities by describing their relationship to other entities. The frame may take the form of a mental representation of a plan or script, a map, a story, or a mental model. It is a structure that accounts for the data on the one hand, but also guides the search for new data and identifies what, amongst the myriad information, counts as data. Sensemaking is not merely joining the dots or generating inferences; it is also identifying what counts as a dot, and how to go about seeking new dots. It is finding a frame that accounts for the data, as well as directing the seeking of new data to elaborate or question the frame. The goal of sensemaking is a functional understanding that supports decisions and action.
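
To make the reciprocal data-frame relationship concrete, the cycle of elaborating, questioning, and reframing can be sketched in code. This is purely an illustrative sketch of the conceptual model described above: the published model prescribes no implementation, and every name here (Frame, accounts_for, sensemaking_cycle) is our own hypothetical shorthand.

```python
# Illustrative sketch only: the Data-Frame model is conceptual, and none of
# these names or structures come from the published model.
from dataclasses import dataclass, field


@dataclass
class Frame:
    """An explanatory structure: a story, script, map, or mental model."""
    explanation: str
    expectancies: set = field(default_factory=set)  # data the frame leads us to seek
    anchors: set = field(default_factory=set)       # cues already woven into the account

    def accounts_for(self, datum) -> bool:
        # The frame defines what counts as data: what it expects or anchors on.
        return datum in self.expectancies or datum in self.anchors


def sensemaking_cycle(frame, incoming_data, rival_frames):
    """One pass of elaborating a frame, questioning it, and reframing if needed."""
    anomalies = []
    for datum in incoming_data:
        if frame.accounts_for(datum):
            frame.anchors.add(datum)   # elaborating: fold the datum into the account
        else:
            anomalies.append(datum)    # questioning: the datum does not fit the story
    for rival in rival_frames:
        # Reframing: adopt a competing account that explains the misfit data.
        if anomalies and all(rival.accounts_for(d) for d in anomalies):
            return rival
    return frame
```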

Methodology: Tailored Cognitive Task Analysis

In order to understand the specific challenges that sensemaking poses in a given work domain, we have had to customize existing cognitive task analysis methodologies to uncover the sensemaking requirements that must be supported in that domain. These methods include knowledge elicitation as well as knowledge representation techniques (Crandall, Klein & Hoffman, 2007). The primary knowledge elicitation technique is the Sensemaking Critical Incident Method (SCIM), a direct derivative of the Critical Decision Method (CDM; Klein, Calderwood & MacGregor, 1989). This interview method focuses on eliciting a critical incident in which the interviewee experienced a surprise or growing sense of doubt about how s/he understood the situation. A second "sweep" of the incident elicits a timeline on which the initial frame, the surprise or violated expectancy, and the subsequent information seeking and analysis undertaken to recover situation awareness are identified. Third, probes are used to understand the assessment and understanding components of the different elements of the account. Probe questions explore the information sources, information exchange, information integration, questioning of the initial assessment, elaboration of the initial assessment, or comparison of competing assessments until an alternative assessment is generated, as well as the barriers to effective sensemaking (including knowledge shields, process issues, and technology issues).

The primary means of analyzing these incidents is a Sensemaking Requirements Table (SRT), a direct derivative of the cognitive demands tables described in Applied Cognitive Task Analysis (Militello & Hutton, 1998) and of decision requirements tables (DRTs). The column headings include: Why was the element challenging, and what made it difficult? What were the critical cues and anchors for the different assessments at various points in time? What strategies were used to filter information, synthesize new information, discard information, identify new information needs, reconsider old information, and so forth? The final column captures seeds for design ideas that might address the different sensemaking challenges identified in the incident. An optional column in the SRT captures potential novice errors: elements of the sensemaking process that prove challenging for less experienced personnel, lead them into error, or present barriers to them. These are areas where technology (sensors, data processing, and displays) might be explored to support users of the system in making sense of the information available to them.

We have also explored alternative methods for eliciting sensemaking challenges, including observations focused on sensemaking and on identifying "hotspots" while observing individuals and teams attempting to recover from a misunderstanding of a situation, and non-incident-based methods that pose sensemaking challenges and provide the opportunity for observation, think-aloud protocols, and/or interviews (such as a simulation interview, in which a scenario is presented to the interviewee). The focus of all these methods is on how the individual or team comes to actively and deliberately understand a situation, especially in the context of a surprise or growing suspicion that the current understanding is not supporting effective action.
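
The structure of the SRT lends itself to a simple representation. The sketch below shows one row, with fields paraphrasing the column headings just described; the field names and the example content are invented for illustration and are not drawn from an actual analysis.

```python
# Hypothetical sketch of one SRT row; field names paraphrase the column
# headings described above, and the example content is invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SRTRow:
    sensemaking_element: str       # the challenging element of the incident
    why_difficult: str             # what made the element challenging
    cues_and_anchors: List[str]    # critical cues for assessments over time
    strategies: List[str]          # filtering, synthesizing, discarding, etc.
    design_seeds: List[str]        # ideas that might address the challenge
    novice_errors: List[str] = field(default_factory=list)  # optional column


# An invented example row, for illustration only:
row = SRTRow(
    sensemaking_element="Realizing the contact was not a merchant vessel",
    why_difficult="Early track data fit the initial frame; conflicting cues were sparse",
    cues_and_anchors=["course changes near the shipping lane", "intermittent contact"],
    strategies=["reconsider old information", "actively seek disconfirming cues"],
    design_seeds=["show track history alongside the current tactical picture"],
    novice_errors=["fixating on the initial classification"],
)
```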

Application: Decision-Centered Design

Having formulated an approach to understanding sensemaking in a specific work context, we have recently attempted to communicate these ideas and methods in a package that technologists can use to carry sensemaking challenges into the design requirements for the systems they are building. We have previously presented sensemaking as one of the cognitive drivers of design in the context of a Decision-Centered Design (DCD) approach, a focused Cognitive Systems Engineering methodology (Hutton, Miller & Thordsen, 2003; Crandall, Klein & Hoffman, 2007). Sensemaking is one of a number of macrocognitive functions that need to be supported by design. Some tasks are heavily weighted toward assessing situations and sensemaking, and for these DCD is driven by the sensemaking challenges; in other cognitive work domains, sensemaking may be less critical.

We currently have three projects in which we are focusing on improving the treatment of sensemaking in the cognitive systems engineering process. We recently completed two projects with groups who are building systems to support network centric warfare technologies. With one group we presented a series of instructional workshops that introduced: 1) the theory and models of macrocognition, focusing particularly on sensemaking; 2) the methods for "seeing" sensemaking in the work domain for which they are designing; 3) the DCD design process that leverages the findings of the CTA methods in support of designing for individuals and teams; and 4) metrics and measures for assessing whether sensemaking performance is being improved by the design ideas that are generated. With the second group, we are implementing the same workshop ideas in a series of smaller workshops with more hands-on practice and feedback, in the context of a specific design project to build a suite of tools to support a commander working with his Battalion from the back of his vehicle, on the move. In addition, we currently have a project applying the methodology to the development of situation displays that support submarine commander sensemaking (Dominguez et al., 2006).

A key challenge for the application of DCD (and CSE) in general, and for designing for sensemaking in particular, is how to measure performance improvements and show the value added by the approach. We have developed strategies, measurement, and evaluation techniques for this problem on the projects mentioned above. The area of cognitive metrics for real-world task environments is still not well documented and is ripe for further development. We are working on a two-pronged approach to evaluation and cognitive system performance measurement, combining qualitative and quantitative approaches. Our qualitative approach rests on evaluation criteria derived from macrocognitive dimensions and CSE principles (Long & Cox, 2007). Our quantitative approach rests on particular evaluation or measurement paradigms (such as the "garden path" paradigm used by Rudolph (2003) to evaluate anesthetists' diagnostic performance), and on metrics such as information and assessment quality judgments, time to perform a task or subtask, time to notice and act on new information, recovery from garden path fixation, frequency of key verbalizations, correctness of subsequent actions relative to some (expert) model, quality of explanations, accuracy of anticipations, and so forth.
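
Several of the quantitative measures just listed reduce to simple computations over time-stamped trial events. The sketch below illustrates two of them; the event names and timings are invented, and scoring the richer measures (e.g., quality of explanations) would of course require expert judgment rather than arithmetic.

```python
# Hypothetical sketch of two quantitative sensemaking measures computed from
# time-stamped trial events. Event names and values are invented.

def time_to_notice_and_act(events):
    """Latency from a new cue appearing to the operator acting on it."""
    return events["acted_on_cue"] - events["cue_presented"]


def garden_path_recovery(events):
    """Time to reframe after disconfirming evidence appears (cf. the
    garden-path paradigm used by Rudolph, 2003)."""
    return events["reframed"] - events["disconfirming_cue_presented"]


trial = {
    "cue_presented": 12.0,                 # seconds into the scenario
    "acted_on_cue": 47.5,
    "disconfirming_cue_presented": 120.0,
    "reframed": 208.0,
}
print(time_to_notice_and_act(trial))   # 35.5
print(garden_path_recovery(trial))     # 88.0
```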

Lessons Learned & Challenges

We have yet to conduct a complete evaluation of a system guided by the DCD-for-sensemaking approach described here; this is a major challenge for future work. We are currently supporting the development of an evaluation plan for one of the design groups we worked with on the Battalion battle command on-the-move design project. We have conducted evaluations of our instructional workshops and of paper system prototypes, which have indicated positive reactions to the approach and to its ability to generate insight into the sensemaking challenges that must be supported by an eventual design.

One of the toughest challenges faced by human factors designers is understanding the conceptual model that drives both the questions asked in the knowledge elicitation phase of requirements gathering and the representation of that information in support of design recommendations. In our experience, the initial knowledge elicitation requires a firm understanding of the conceptual models of macrocognition ("cognition in context") in order to identify the cognitive challenges that then suggest design solutions.

Given the lack of solid system performance data to support the benefits of CSE design approaches in general and our DCD approach in particular (for one exception see Klinger et al., 1993), a major challenge remains convincing sponsors and customers that this approach is worthwhile and has the potential to: 1) improve the quality of designs; 2) save money by reducing design iteration cycles; 3) avoid system breakdowns and catastrophic failures due to technology "making people stupid" (Klein, 2004); and 4) generate performance breakthroughs rather than merely meeting acceptance standards.

Some of the challenges in promoting this approach to designing systems that support cognition (including sensemaking) are:

1) It takes time to learn how to use these methods, to understand cognitive work, and to learn how to support it. Do not expect miracles the first time round.

2) Good design often looks obvious. Only comparing performance against the original design, or against a design produced by an alternative design process, will show which design process worked best and which design adds value. Well-designed interfaces often look the way users think they should, so users and sponsors may be disappointed. But this probably means you did it right!

3) Just because an interface or system is designed using a CSE/DCD method does not mean that the final product will look revolutionary, which sometimes disappoints sponsors and customers.

4) It is hard to nail down requirements from the outset. The role of CTA continues throughout the design process, from initial requirements to evaluating interim design ideas. It is not a lock-step, mechanical process; the value is in seeking the cognitive challenges as the drivers of effective system support and interface design. Sensemaking challenges are no easier to see, and will reveal themselves throughout the design process as different design hypotheses are developed and tested. The challenge is to identify ways to flex the design ideas and cognitive work requirements, and to identify measures to track (cognitive) performance changes.

Summary

This approach is intended to illuminate the critical importance of building an effective understanding of a situation that is dynamic and full of uncertainty, where surprises and doubt often invalidate the initial understanding. The approach provides a theoretical basis for understanding sensemaking processes, a set of methods for exploring and uncovering sensemaking in a work domain, and an application framework that supports the design of systems and interfaces for improved sensemaking. Challenges to refining this design approach and improving systems that support cognitive work have been identified.

References

Dekker, S., & Lutzhoft, M. (2004). Correspondence, cognition and sensemaking: A radical empiricist view of situation awareness. In S. Banbury & S. Tremblay (Eds.), A Cognitive Approach to Situation Awareness: Theory and Application. Ashgate Publishing.
Dominguez, C., Long, W. G., Miller, T. E., & Wiggins, S. L. (2006). Design directions for support of submarine commanding officer decision making. In Proceedings of the 2006 Undersea HSI Symposium: Research, Acquisition and the Warrior, June 6-8, 2006, Mystic, CT, USA.
Endsley, M. R. (2000). Theoretical underpinnings of situation awareness: A critical review. In M. R. Endsley & D. J. Garland (Eds.), Situation Awareness Analysis and Measurement. Mahwah, NJ: Lawrence Erlbaum Associates.
Klein, G. (1993). Recognition-primed decision making: Looking back, looking forward. In G. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision Making in Action: Models and Methods. Ablex Publishing.
Klein, G. A. (2004). The Power of Intuition. New York: Doubleday.
Klein, G., Moon, B., & Hoffman, R. R. (2006a). Making sense of sensemaking 1: Alternative perspectives. IEEE Intelligent Systems, 21(4), July/August.
Klein, G., Moon, B., & Hoffman, R. R. (2006b). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(5), September/October.
Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.
Klein, G., Long, W. G., Hutton, R. J. B., & Shafer, J. (2004). Battlesense: An innovative sensemaking-centered design approach for combat systems (Final report prepared under contract #N00178-04-C-3017 for Naval Surface Warfare Center, Dahlgren, VA). Fairborn, OH: Klein Associates Inc.
Klinger, D. W., Andriole, S. J., Militello, L. G., Adelman, L., Klein, G., & Gomes, M. E. (1993). Designing for performance: A cognitive systems engineering approach to modifying an AWACS human-computer interface (Technical Report AL/CF-TR-1993-0093). Wright-Patterson AFB, OH: Department of the Air Force, Armstrong Laboratory, Air Force Materiel Command.
Long, W., & Cox, D. A. (2007, June 4-6). Indicators for identifying systems that hinder cognitive performance. Paper presented at the Eighth International Conference on Naturalistic Decision Making, Asilomar, CA.
Militello, L. G., & Hutton, R. J. B. (1998). Applied Cognitive Task Analysis (ACTA): A practitioner's toolkit for understanding cognitive task demands. Ergonomics Special Issue: Task Analysis, 41(11), 1618-1641.
Rudolph, J. W. (2003). Into the big muddy and out again. Unpublished doctoral dissertation, Boston College, Boston, MA.
Sieck, W. R., Klein, G., Peluso, D. A., Smith, J. L., & Harris-Thompson, D. (2007). FOCUS: A model of sensemaking (ARI Technical Report 1200). Arlington, VA: U.S. Army Research Institute for the Behavioral & Social Sciences.
Weick, K. E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage Publications.

Acknowledgements

This work is supported by contracts with the Defence Science and Technology Agency (DSTA), Singapore; the Defence Science Organization National Laboratories (DSO), Singapore; the U.S. Naval Surface Warfare Center, Dahlgren; and the U.S. Office of Naval Research.
