The evaluation strategy for the MIXER application comprises two distinct strands. The first is a longitudinal study, which focuses on the learning outcomes of the MIXER application; the second is a one-day intervention called Show Time, which focuses on evaluating the more ‘difficult to evaluate’ aspects of the project’s R&D themes. Both strands involve an element of Transmedia Evaluation.

Transmedia Evaluation is an approach designed to remove unnecessary evaluation encumbrance from the participant whilst still obtaining high-quality evaluation results and findings. With Transmedia Evaluation, the user’s role and activities remain intact and consistent with their interaction reality, whether the user is a player, social networker or commentator.

For further information on the Transmedia Evaluation approach please visit: http://ecute.eu/aaei/

Longitudinal Study Overview

In our evaluation of MIXER we had four conditions, with 20–30 children participating in each:

  • Single Interaction: children engage once with MIXER
  • Double Interaction: children engage twice with MIXER
  • Passive MIXER: children watch an interaction with MIXER
  • Control: children watch a video on camping (Control Group Activity Plan).

All conditions completed the pre-test and post-test, which were provided in the form of activity books, with each child completing a series of activities, including selected factors from three questionnaires: the Matson Evaluation of Social Skills with Youngsters (Matson, Rotatori, & Helsel, 1983), Bryant’s Empathy Index for Children (Bryant, 1982; Lasa Aristu, Holgado Tello, Carrasco Ortiz, & Del Barrio Gándara, 2008), and the Cultural Intelligence Scale (Ang et al., 2007). After the children have interacted with MIXER, they complete an Engagement Questionnaire based on the Agent Evaluation Questionnaire (Hall, Woods & Aylett, 2006).

Evaluation Activity Books


Ang, S., Van Dyne, L., Koh, C., Ng, K. Y., Templer, K. J., Tay, C., & Chandrasekar, N. A. (2007). Cultural intelligence: Its measurement and effects on cultural judgment and decision making, cultural adaptation and task performance. Management and Organization Review, 3(3), 335-371.
Bryant, B. K. (1982). An index of empathy for children and adolescents. Child Development, 53(2), 413–425. doi:10.2307/1128984
Hall, L., Woods, S., & Aylett, R. (2006). FearNot! Involving children in the design of a Virtual Learning Environment. Journal of Artificial Intelligence and Education: Special Issue on Learner Centred Methods for Designing Intelligent Learning Environments, 16(4), 237–251.
Hall, L., & Hume, C. (2012). Extending the Story into the Feedback Loop: Transmedia Evaluation. 1st Global Conference Immersive Worlds and Transmedia Narratives, 13–15 November, Salzburg.
Hall, L., Jones, S., Aylett, R., Hall, M., Tazzyman, S., Paiva, A., & Humphries, L. (in press). Serious Game Evaluation as a Metagame. Journal of Interactive Technology and Smart Education.
Lasa Aristu, A., Holgado Tello, F. P., Carrasco Ortiz, M. A., & Del Barrio Gándara, M. V. (2008). The structure of Bryant’s Empathy Index for children: A cross-validation study. The Spanish Journal of Psychology, 11(2), 670–677. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18988452
Matson, J., Rotatori, A., & Helsel, W. (1983). Development of a rating scale to measure social skills in children: The Matson Evaluation of Social Skills with Youngsters (MESSY). Behaviour Research and Therapy, 21(4), 335–340. Retrieved from http://www.sciencedirect.com/science/article/pii/0005796783900013