Monday, January 21, 2013

If you could read my mind...it would be MindView!

Screenshot from MindView's PDF Manual
The software that I would like to consider for evaluation is MatchWare Education Software's MindView. Essentially, the software allows students to organize their ideas using a mind-mapping technique so they can visualize, brainstorm and organize before they begin writing. Students can export their visual representation as an outline to MS Word or Rich Text Format (RTF). The software can also be used for storyboarding, presentations and timelines. As an educator who teaches English, Writing, Journalism and Media Studies, I can see various potential applications for this software. The software can be downloaded for a 20-day trial at the link below. It is available for Windows PCs (MindView 5) and Apple Macintosh computers (MindView 4). You do have to register to download the software, but it does not ask for any financial information. For the purpose of this assignment, I will consider the use of this software within Language Arts activities/assignments and link it to the Atlantic Canada Language Arts Curriculum Outcomes:


  • Students will be expected to interpret, select, and combine information, using a variety of strategies, resources and technologies. 
  • Students will be expected to respond personally to a range of texts. 
  • Students will be expected to respond critically to a range of texts, applying their understanding of language, form and genre.
  • Students will be expected to use writing and other ways of representing to explore, clarify and reflect on their thoughts, feelings, experiences and learning; and to use their imagination. 
  • Students will be expected to create texts collaboratively and independently, using a variety of forms for a range of audiences and purposes. 
  • Students will be expected to use a range of strategies to develop effective writing and other ways of representing, and to enhance clarity, precision and effectiveness. 


Link to download a free trial of MindView (PC or Mac): http://www.matchware.com/en/downloads/default.htm

Sunday, January 13, 2013

Stumbling over software models

Screenshot of AMAC's software evaluation form.
A very comprehensive educational software evaluation form was designed by AMAC, a unit of the Enterprise Innovation Institute at Georgia Tech. The form includes an area to record basic information about the software, including its cost as well as its stated target population.

AMAC's purpose is to improve the lives of individuals with disabilities "by providing technology-based products, services, and research at competitive or reduced costs." I assume the software evaluation model was developed with this goal in mind. However, the model could be adapted for other educational software evaluations. 


The model also considers what feedback the software provides to the student regarding performance, as well as how the student's progress is monitored. Another important feature is the inclusion of universal design considerations (Demands on the User and Adaptability).


The Technical Quality section also includes a number of important criteria, such as help screens, student motivation, whether the program operates without crashing, and whether it can be operated through multiple means.


I think one of the problems with this model is its length. The entire model is eight pages (including Appendices A and B). It would probably be possible to arrange the model in a different layout that would reduce the number of pages.


If the model were applied to other educational software, I think it would be beneficial to also provide an opportunity for user feedback. My sense from this model is that the evaluation is completed by someone observing the student as he/she uses the software, or by an educator operating the software on a trial basis.


During my search, I also stumbled across two other sources of information that I thought were beneficial. One is the article "Evaluation of Educational Software: Theory into Practice." The article takes into consideration the different purposes of software and also discusses different approaches to teaching. It categorizes software into four different segments and then suggests the criteria required for evaluating each. It cleared up a few things for me in trying to come to terms with software that may be more of a "tool for learning" versus software as a "virtual class." I also liked the article's conclusion, which includes this statement: "software is powerful not because it is technologically superior but because it enables educators of different educational perspectives, to bring creative innovations into teaching and learning."


My second stumble was over Prince Edward Island's site for software evaluation. The site's model includes three steps: software submission, software quality assessment, and technical and quality assessment. There are some very helpful PDF documents on this site for anyone looking at software evaluation models. The department also notes that this process is mainly for educators or schools who wish to have software approved for school network use.


Links to materials or sites mentioned in this article. 


http://www.amacusg.org/
http://www.amac.gatech.edu/wiki/images/e/e8/Softevalform_Fall07.doc
http://eprints.utas.edu.au/1328/1/11-Le-P.pdf
http://www.edu.pe.ca/softeval/

Thursday, January 10, 2013

Whose opinion is best?

Photo from morguefile.com
After reading Deborah Lynn Stirling's article, "Evaluating Instructional Design," I am not sure if I am more or less confused about the best approach for software evaluation. I like the idea of an experimental study, but as noted in the article, how the student learns or how the software is used is not part of this approach. The effectiveness of the software is measured through the results of student achievement. Yet high student achievement does not necessarily mean the software is effective. As indicated in Wanda Y. Ginn's article "Jean Piaget - Intellectual Development," drill and practice computer software "does not fit in with an active discovery environment" and "does not encourage creativity or discovery." A piece of instructional software could simply be drill and practice that produces high student achievement, but if the evaluation approach does not include examining how the student learns, I think it has a major shortcoming.

In contrast, Stirling's discussion of the User Surveys approach indicates that teachers do "in fact judge software based on evidence other than student achievement." While I am not always sure of the practicality of this approach, I do agree with the statement that teachers "can benefit from field testing software within their own classroom." I know from my own experience that if I can find the time to have a few students sit down and try software before I introduce it to the whole class, I can avoid minor issues that I simply would not have anticipated. The software is ultimately going to be in the hands of the teacher and the students, so I think this approach makes the most sense, despite its challenges in terms of practical application.

The overview provided by Stirling on evaluating instructional software could be appropriate for tool/application software like word processors or spreadsheets. I think that any approach which involves both the educator and the student, like the User Surveys approach, creates an opportunity for students to become, as Ginn points out, "active participants instead of passive sponges" in their learning. From my own experience, I have introduced software to students only to have some of them suggest they could do the work more effectively with a different application. So if the teacher is willing to field-test software (whether it be instructional or another type of application) with his/her students, that will certainly open the door for students to be more actively involved in their learning. At the same time, field testing of tool/application software is becoming increasingly difficult as students bring their own devices to school (BYOD). The traditional standardization of technology in many schools is slowly fading. Norris and Soloway, in "Tips for BYOD K12 Programs," argue that by the year 2015 "every student in America's K-12 public school system will have a mobile device to use for curricular purposes." As students continue to be connected to the Internet, with various Web 2.0 applications available to them, it only makes sense that they will have a more active role in choosing the applications they want to use to get the job done. It can be argued that this shift toward less standardization and more personalization could make field testing obsolete. I would argue, however, that this approach blends well with the Web 2.0 world, where students are given the opportunity in the classroom to present applications for field testing that the teacher might not have considered or even be aware of.

When I initially considered Stirling's conclusion on evaluating instructional software, I decided that the direct approach was more in keeping with her perspective. I took this position because the direct approach is where the teacher has control, and Stirling argues that "software evaluation should be conducted by the instructor." What is not clear to me, however, is whether Stirling is arguing for a completely new approach to software evaluation, or whether she is suggesting that the last approach she discusses, User Surveys (where the teacher field-tests the software), is best suited because the teacher/instructor conducts the evaluation. If she is suggesting the User Surveys approach with field testing in the classroom, I think this could potentially be a constructivist perspective because, as Stirling concludes, "the evaluation method used should yield information about quality, effectiveness, and instructional use." This approach, in my opinion, would have to go beyond data collection involving only student achievement. The students involved in the field testing would have to be active participants in that data collection even though the approach is being conducted by the teacher. Stirling's quote from V.E. Weckworth, however, would suggest a more direct approach, as the argument is made that the critical element in evaluation is "who...has the power, the influence, or the authority to decide." If Stirling is suggesting the teacher should conduct his/her evaluation with this philosophy in mind, then it would definitely be a direct approach. However, I would argue that this view of power, influence and authority is greatly outdated in today's world, especially with tool/application software, where younger students have as much access to powerful applications as adults and where lucrative software startups are created by people hardly out of high school.

In following the Expert Opinion approach for software evaluation, I think there are a number of criteria that should be included. I struggled to come up with more than three, so I got some help from the site TechPudding.

  • One major concern in any educational community is budgeting, so the cost of licensing, implementation and potential upgrading should be considered. 
  • As an educator, I think it is important to consider how the software collects and tracks data, and whether it allows the teacher/school to analyze and share that data. 
  • Is the software user-friendly, with a clean layout and user interface? Does it follow the typical interface conventions of most existing software so the user does not have to relearn how to navigate? 
  • The site TechPudding makes a great case for universal design and higher-order thinking attributes. In terms of engagement and learning in the 21st century, does the software meet the needs of today's learners?