Document Type
Article
Publication Date
1-1-2017
Publication Title
IEEE Transactions on Visualization and Computer Graphics
Abstract
Visual analytic systems have long relied on user studies and standard datasets to demonstrate advances to the state of the art, as well as to illustrate the efficiency of solutions to domain-specific challenges. This approach has enabled some important comparisons between systems, but unfortunately the narrow scope required to facilitate these comparisons has prevented many of these lessons from being generalized to new areas. At the same time, advanced visual analytic systems have made increasing use of human-machine collaboration to solve problems not tractable by machine computation alone. To continue to make progress in modeling user tasks in these hybrid visual analytic systems, we must strive to gain insight into what makes certain tasks more complex than others. This will require the development of mechanisms for describing the balance to be struck between machine and human strengths with respect to analytical tasks and workload. In this paper, we argue for the necessity of theoretical tools for reasoning about such balance in visual analytic systems and demonstrate the utility of the Human Oracle Model for this purpose in the context of sensemaking in visual analytics. Additionally, we make use of the Human Oracle Model to guide the development of a new system through a case study in the domain of cybersecurity.
Keywords
human oracle, mixed-initiative systems, semantic interaction, sensemaking, theoretical models, visual analytics
Volume
23
Issue
1
First Page
121
Last Page
130
DOI
10.1109/TVCG.2016.2598460
ISSN
1077-2626
Recommended Citation
Crouser, R. Jordan; Franklin, Lyndsey; Endert, Alex; and Cook, Kris, "Toward Theoretical Techniques for Measuring the Use of Human Effort in Visual Analytic Systems" (2017). Computer Science: Faculty Publications, Smith College, Northampton, MA.
https://scholarworks.smith.edu/csc_facpubs/208
Comments
Peer reviewed accepted manuscript.