Narratives and interactivity have been widely used in data visualizations to present data in a more compelling and accessible manner. However, the effect of narratives and interactivity on insight generation has not been sufficiently studied.
We study the effect of different visualization strategies, operationalised through two variables: interactivity (static or interactive) and narrative (non-narrative or narrative), on recall. We designed and conducted a controlled experiment with 400 participants on Amazon's Mechanical Turk.
Participants were asked to interact with visualizations adapted from professionally designed visualizations to the four possible combinations of the two variables, interactivity and narrative; they were then required to answer an 11-item True/False questionnaire.
We used mixed-effects models to analyse the data. Our results indicate a small positive effect due to the presence of narratives, but little or no effect due to the presence of interactivity, on the probability of answering a question correctly.
This was a two-semester project, submitted in partial fulfillment of the requirements for the degree of Master of Science in Information at the University of Michigan.
Visualization design & development
Data cleaning & manipulation
Model fitting & analysis
There are some commonly used design strategies for creating visualizations:
We thus define this design space for explanatory infovis.
What is the effect of the presence of interactivity and narratives on the audience's ability to gather insight from a visualization?
Measured the effect of the presence of an introductory narrative component in an interactive visualization on user engagement. Found that the narrative component did not result in more engagement. However, engagement is only an indirect way to measure insight.
Insight means gaining an accurate understanding of the data. Hence, we use recall and comprehension, which allow us to observe the outcome of interest directly.
We created a catalog of professionally produced visualizations for which the data was publicly available and which we felt could be adapted to each of the four quadrants of our proposed design space. We then identified 4 visualizations, adapted each onto this design space, and developed them using HTML, CSS and D3.js. Thus, we ended up with 4 visualizations x 4 conditions = 16 versions.
We ensured each visualization had the same degree of expressiveness: the visual encoding of the data expressed an equal amount of information across each of the versions.
Created an 11-item True/False questionnaire for each visualization in Qualtrics, to measure recall and comprehension. One question served as an advanced attention check.
We used a completely between-subjects design and recruited 400 participants on MTurk.
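As a sketch of how such a balanced between-subjects assignment can be generated (the cell labels and function name below are hypothetical illustrations, not our actual implementation):

```python
import random

# Hypothetical labels for the 4 visualizations and 4 conditions.
VISUALIZATIONS = ["vis1", "vis2", "vis3", "vis4"]
CONDITIONS = ["static", "interactive", "narrative", "narrative-interactive"]

def assign_participants(n_participants, seed=0):
    """Return a shuffled list of (visualization, condition) cells,
    one per participant, balanced across all 16 cells."""
    cells = [(v, c) for v in VISUALIZATIONS for c in CONDITIONS]
    per_cell = n_participants // len(cells)  # 400 // 16 = 25
    assignments = cells * per_cell
    random.Random(seed).shuffle(assignments)
    return assignments

assignments = assign_participants(400)
```

With 400 participants and 16 cells, this yields 25 participants per visualization-condition pair, and each participant sees exactly one version.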
49 participants failed the (pre-registered) attention-check question, of whom 31 saw the same visualization. We performed the analysis both with and without these participants, because failing the attention check may have been due to a poorly worded question; the results were similar whether those participants were included or excluded.
We used a Bayesian mixed-effects logistic regression model to estimate the probability of answering a question correctly.
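The structure of such a model can be illustrated by simulating from it. The sketch below (all coefficient and variance values are made up for illustration; they are not our fitted estimates) generates question-level correctness from fixed condition effects plus random intercepts for participants and questions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sketch of the generative model behind a mixed-effects
# logistic regression on question-level correctness.
n_participants, n_questions = 400, 11
beta_0 = 0.8            # intercept (log-odds of a correct answer)
beta_narrative = 0.4    # fixed effect of the narrative condition (made up)
beta_interactive = 0.0  # fixed effect of interactivity (made up)

narrative = rng.integers(0, 2, n_participants)      # condition indicators
interactive = rng.integers(0, 2, n_participants)
u_participant = rng.normal(0, 0.5, n_participants)  # random intercepts
u_question = rng.normal(0, 0.5, n_questions)

# Linear predictor for every (participant, question) pair
eta = (beta_0
       + beta_narrative * narrative[:, None]
       + beta_interactive * interactive[:, None]
       + u_participant[:, None]
       + u_question[None, :])
p_correct = 1.0 / (1.0 + np.exp(-eta))  # inverse-logit
correct = rng.random((n_participants, n_questions)) < p_correct
```

Fitting runs this in reverse: given the observed matrix of correct/incorrect answers, the model estimates the fixed effects and the random-intercept variances.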
In Fig. 3(a), we show the raw probabilities of answering a question correctly for an average individual. In Fig. 3(b), we compare the difference between the interactive-only and narrative-interactive conditions (top), and the difference between the static, non-narrative and narrative-only conditions (bottom).
We observe a small positive effect due to the presence of narratives; the size of this effect corresponds to getting one more question correct on our 11-item questionnaire.
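As a quick back-of-the-envelope conversion of this effect size (trivial arithmetic on our own numbers): one extra correct answer on an 11-item questionnaire corresponds to a shift of roughly 0.09 in the per-question probability of answering correctly.

```python
# One more correct answer out of 11 items, expressed as a
# per-question probability shift.
n_items = 11
extra_correct = 1
delta_p = extra_correct / n_items
print(round(delta_p, 3))  # 0.091
```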
In Fig. 3(c), to measure the effect of the presence of interactivity, we compare the difference between the narrative-only and narrative-interactive conditions (top), and the difference between the static, non-narrative and interactive-only conditions (bottom).
We can see that interactivity most likely doesn't have any effect on recall and comprehension. Thus, we can't conclude that the ability to interact with the visualization allows the viewer to gather more insights.
When we compare the results for each condition within each visualization, we see that the effects are largely consistent, with little variability across the different visualizations.
The presence of narrative has a small positive effect in 5 of the 8 possible comparisons, whereas interactivity again most likely shows no effect on recall in all but one comparison.
There exist several challenges in measuring insight. Thus, we need newer evaluation methods that aim to measure insight directly.