Annotated Bibliography 13
Jones, A., Scanlon, E., Tosunoglu, C., Morris, E., Ross, S., Butcher, P., & Greenberg, J. (1999). Contexts for evaluating educational software. Interacting with Computers, 11, 499-516.
Jones, Scanlon, Tosunoglu, Morris, Ross, Butcher, and Greenberg (1999) analyze how educational multimedia programs have been evaluated. The researchers were concerned that many instructors and administrators were not evaluating educational multimedia programs adequately: if programs are judged against only some criteria, the judgment is incomplete, and instructors cannot make sound educational decisions for themselves and their learners. In response, the researchers created the CIAO! framework for evaluation, which covers context (where the instructional program is used), interaction (how learners engage with the program, including products learners complete after the lesson), and attitudes and outcomes (how learners and instructors feel about the program, and whether it was successful) (Jones et al., 1999, pp. 503-504). The authors acknowledge two difficulties with the model, however: evaluating instructional goals (comparing programs on how effectively they convey knowledge of the problem situation and how attractive the program under evaluation is) and observing the instructional program in use (Jones et al., 1999, pp. 513-514).
Winslow, J., Dickerson, J., & Lee, C. (2013). Evaluating multimedia. In Applied Technologies for Teachers (pp. 251-264). Dubuque, IA: Kendall Hunt.
Winslow, Dickerson, and Lee (2013) explain a set of criteria for judging instructional programs. First, programs are judged on their overall value, which includes "...content validity, potential effectiveness as a teaching-learning tool, and ease of use" (Winslow, Dickerson, & Lee, 2013, p. 254). Second, learning outcomes should match what the instructor intended the learning to achieve (p. 254). Third, the program should provide feedback on learners' answers and adapt to the learner's needs (p. 255). Fourth, the program should challenge learners gradually, be relevant to the learner, and hold the learner's concentration (p. 256). Fifth, the program should follow multimedia principles that make it visually pleasing to the learner (p. 256). Sixth, the program should let the learner interact with it through easy navigation and clear directions (p. 257). Seventh, the program should allow the learner to rework the problem situation in different ways (p. 257). Eighth, the program should follow industry guidelines set by accredited organizations (p. 257).
Lee, C., & Cherner, T. S. (2015). A comprehensive evaluation rubric for assessing instructional apps. Journal of Information Technology Education: Research, 14, 21-53. Retrieved January 22, 2015 from http://www.jite.org/documents/Vol14/JITEV14ResearchP021-053Yuan0700.pdf
Lee and Cherner (2015) present a comprehensive evaluation rubric for educational apps on phones and tablets. The researchers state that past evaluation tools were either not research-based or did not follow specific criteria. They argue that instructional apps should be judged on "Instruction...Design...[and] Engagement..." (Lee & Cherner, 2015, p. 25). In sum, the three domains contain twenty-four specific criteria against which an instructional app can be judged. However, the researchers caution that the instructor should take time to understand what the app is used for, and to understand the details of the rubric, before making a decision about the app.