Investigating the Feasibility of Extracting Tool Demonstrations from In-Situ Video Content
Abstract
Short video demonstrations are effective resources for helping users to learn tools in feature-rich software. However, manually creating demonstrations for the hundreds (or thousands) of individual features in these programs would be impractical. In this paper, we investigate the potential for identifying good tool demonstrations from within screen recordings of users performing real-world tasks. Using an instrumented image-editing application, we collected workflow video content and log data from actual end users. We then developed a heuristic for identifying demonstration clips, and had the quality of a sample set of clips evaluated by both domain experts and end users. This multi-step approach allowed us to characterize the quality of 'naturally occurring' tool demonstrations, and to derive a list of good and bad features of these videos. Finally, we conducted an initial investigation into using machine learning techniques to distinguish between good and bad demonstrations.
BibTeX
@inproceedings{10.1145/2556288.2557142,
abstract = {Short video demonstrations are effective resources for helping users to learn tools in feature-rich software. However, manually creating demonstrations for the hundreds (or thousands) of individual features in these programs would be impractical. In this paper, we investigate the potential for identifying good tool demonstrations from within screen recordings of users performing real-world tasks. Using an instrumented image-editing application, we collected workflow video content and log data from actual end users. We then developed a heuristic for identifying demonstration clips, and had the quality of a sample set of clips evaluated by both domain experts and end users. This multi-step approach allowed us to characterize the quality of 'naturally occurring' tool demonstrations, and to derive a list of good and bad features of these videos. Finally, we conducted an initial investigation into using machine learning techniques to distinguish between good and bad demonstrations.},
address = {New York, NY, USA},
author = {Lafreniere, Ben and Grossman, Tovi and Matejka, Justin and Fitzmaurice, George},
booktitle = {Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
doi = {10.1145/2556288.2557142},
isbn = {9781450324731},
keywords = {help, toolclips, in-situ usage data, feature-rich software, video tooltips, learning},
location = {Toronto, Ontario, Canada},
numpages = {10},
pages = {4007–4016},
publisher = {Association for Computing Machinery},
series = {CHI '14},
title = {Investigating the Feasibility of Extracting Tool Demonstrations from In-Situ Video Content},
url = {https://doi.org/10.1145/2556288.2557142},
year = {2014}
}