Sunday, January 13, 2013

Stumbling over software models

Screenshot of AMAC's software evaluation form.
AMAC, a unit of the Enterprise Innovation Institute at Georgia Tech, designed a very comprehensive educational software evaluation form. The form includes an area to record the basic information about the software, including its cost as well as its stated target population. 

AMAC's purpose is to improve the lives of individuals with disabilities "by providing technology-based products, services, and research at competitive or reduced costs." I assume the software evaluation model was developed with this goal in mind. However, the model could be adapted for other educational software evaluations. 


The model also covers what feedback the software provides to the student regarding performance, as well as how the student's progress is monitored. Another important feature is its attention to universal design considerations (Demands on the User and Adaptability).


The Technical Quality section also includes a number of important criteria, such as help screens, student motivation, whether the program operates without crashing, and whether the program can be operated through multiple means. 


I think one of the problems with this model is its length. The entire model runs eight pages (including Appendices A and B). It would probably be possible to arrange the model in a different layout that would reduce the number of pages. 


If the model were applied to other educational software, I think it would be beneficial to also provide an opportunity for user feedback. My sense from this model is that the evaluation is completed by someone observing the student as he/she uses the software, or by an educator operating the software on a trial basis.


During my search, I also stumbled across two other sources of information that I thought were beneficial. One is the article "Evaluation of Educational Software: Theory into Practice." The article takes into consideration the different purposes of software and also discusses different approaches to teaching. It categorizes software into four different segments and then suggests the criteria required to evaluate each one. It cleared up a few things for me in trying to come to terms with software that may be more of a "tool for learning" versus software as a "virtual class." I also liked the article's conclusion, which includes this statement: "software is powerful not because it is technologically superior but because it enables educators of different educational perspectives, to bring creative innovations into teaching and learning."


My second stumble was over Prince Edward Island's site for software evaluation. The site's model includes three steps: software submission, software quality assessment, and technical and quality assessment. There are some very helpful PDF documents on this site for anyone looking at software evaluation models. The department also notes that this process is mainly for educators or schools who wish to have software approved for school network use. 


Links to materials and sites mentioned in this article: 


http://www.amacusg.org/
http://www.amac.gatech.edu/wiki/images/e/e8/Softevalform_Fall07.doc
http://eprints.utas.edu.au/1328/1/11-Le-P.pdf
http://www.edu.pe.ca/softeval/
