Tuesday, April 2, 2013

Revised Educational Software Evaluation Form (CTSS)

Below is our revised educational software evaluation form. If you like the form and use Google Docs, feel free to click on this link and make your own copy to edit and use.

Monday, March 25, 2013

Critique of the ABKD Group's Software Evaluation Model


Photo from morguefile.com
The ABKD group’s educational software evaluation model addresses many of the rapid changes occurring within software development, particularly the abundance of open-source Web 2.0 applications. In the description of their model, the group argues for a “qualitative evaluation” that addresses the needs of teachers while “keeping an open-door for new, innovative software.” The vast majority of new, innovative software is now online, particularly on Web 2.0 platforms, and much of what is available is not specifically designed for educational purposes. Yet educators continually welcome this software, and the ABKD group recognizes this evolution. They argue that such software may not be primarily designed for educational settings, but it “could be used in such a manner if the proper context and guidelines are set by the teacher.”

The explanation in the ABKD group’s introduction that their model “favours a socio-constructivist approach to learning” coincides with Ullrich et al. (2008), who state that learning from a constructivist’s view takes place in a social context and that “the innate properties of Web 2.0 services are beneficial for learning” (p. 707). In their discussion of Web 2.0, one example that Ullrich et al. (2008) provide is the learning of a foreign language, which, coincidentally, is the focus of learning in the trial evaluations the ABKD group conducted with WordPress and Audacity. Indeed, Ullrich et al. (2008) argue that Web 2.0 “is characterized by social learning and active participation, as advocated by constructivism” (p. 709).

One of the biggest strengths of the ABKD group’s software evaluation approach is the opportunity for the evaluator(s) to deeply explore and provide feedback based on socio-constructivist goals and outcomes, as evident in section three of their model. Question 3.3 states: “Does the software encourage exploration, experimentation, and student creation of their own knowledge? Explain how with some examples of what students can do.” In their own trial evaluations, the ABKD group illustrated the effectiveness of letting the evaluator explain how the software “can help learning,” as opposed to giving simple “yes” or “no” answers. For example, in their evaluation of WordPress, the response to question 3.3 notes that students write posts about their learning and other daily experiences, which gives them the opportunity to get to know each other better and form friendships as they communicate in a foreign language.

This approach was also effective in section two, regarding usability. In question 2.4, on Audacity’s interface design and representation of the content, the evaluator makes specific comments about the application not being “visually stimulating” and notes that “young students may find it ‘boring’ to look at.” Further observations on this question explain that the icons in the interface are “unique to audio processing” and that practice with the interface is necessary, even though manual editing is simple to complete. The qualitative format of the evaluation model allows for observations from the evaluator that can serve as warning flags for educators whose students could become overwhelmed by unfamiliar and/or difficult interfaces. Ullrich et al. (2008) reference previous studies showing that “disorientation and cognitive overload are the principal obstacles of self-regulated learning in technology-enhanced learning” (p. 707).

Another example of the effectiveness of this model’s design is apparent in section five, “Support and Continuous Development,” where the evaluator must complete a number of comprehensive open-ended questions concerning online documentation, opportunities to provide feedback to the developer, the status of the developer’s website, available updates, etc. Question 5.3 instructs the evaluator to “Look at the developer's website and comment on recent activities. Are developers addressing concerns and problems in the forums? Do current users seem happy with the software?” These questions are important not only in terms of what support and documentation is available, but also, as indicated by Stamelos, Refanidis, Katsaros, Tsoukias, Vlahavas and Pombortsis (2000), in “giving suggestions on the various teaching strategies instructors can adopt...informing how the program can be fitted into a larger framework of instruction, etc.” (p. 9). Such resources and user feedback on developers’ sites are becoming increasingly vital as Web 2.0 applications are used in education, and they reflect the nature of Web 2.0 in “harnessing the power of the crowd” (Ullrich et al., 2008, p. 707).

This software evaluation model covers many issues that can arise when using Web 2.0 software and saving data to the “cloud.” The model addresses data portability in question 4.2 under “User Data and Security.” As Web 2.0 applications evolve, and are sometimes even discontinued, as with the recent announcement that Google will be shutting down Google Reader (Bilton, 2013), it is very important that an evaluator provides information on how data can be saved and exported. Furthermore, information must also be provided regarding terms of service around ownership, privacy, etc., as addressed in questions 4.3, 4.4 and 4.5.

One of the strengths of the ABKD group’s software evaluation model could also be its biggest weakness: the process. While it addresses the reality of most educational settings, where educators usually work hand in hand with technology coordinators, the model may become unwieldy in its execution. The model is divided into three parts. The instructor completes the preliminary evaluation; the instructor, in conjunction with the technology coordinator, completes the secondary evaluation. If the software is deemed worthy of further scrutiny, it is then tested with a pilot group of students, who complete the student evaluations.

The preliminary evaluation is relatively concise, and the evaluator answers most of the form using a Likert scale. However, while the ABKD group indicates this form is to be completed by the instructor, the form presenting the results of the WordPress evaluation is titled “ICT Coordinator,” and its instructions state it is to be completed by “the instructor” to make “a secondary assessment.”

Aside from issues that could cause confusion over titles, terminology and when each evaluation is to take place, another concern with the process is the assumption that “the teacher is knowledgeable in current teaching trends and best practices, and seeks to employ a constructivist pedagogy as the dominant form of instruction and learning. It also assumes the teacher is knowledgeable in the content area for which the software is intended.” Even a teacher who is knowledgeable in current trends and practices may still have limited experience and knowledge when it comes to technology and software evaluation. Tokmak, Incikabi and Yelken (2012) report that when students studying in education programs performed software evaluations, they did not “provide details about specific properties in their evaluation checklist or during their presentation” and instead “evaluated the software according to general impressions” (p. 1289). They concluded that education students and new teachers “should be given the opportunity to be involved in software evaluation, selection and development” (p. 1293).

One can argue that the ABKD group’s evaluation process addresses some of these concerns, since the majority of the evaluation is to be completed by the instructor in conjunction with a technology coordinator. However, it may be possible to streamline the process by incorporating the preliminary evaluation (the “Basic Software Information” and “Features and Characteristics” sections) into the secondary evaluation that is jointly completed by the instructor and technology coordinator. New teachers, and teachers who subscribe to a constructivist view but have limited experience with technology and software evaluations, may find the preliminary evaluation intimidating and/or confusing. Furthermore, the final assessment completed by the students may be best considered optional, since not every school environment allows for such an evaluation to take place. This was evident even in the ABKD group’s own software evaluations, as they did not have the opportunity for students to complete the third part due to holidays.

Overall, the software evaluation model proposed by the ABKD group is a comprehensive, qualitative model that addresses the current evolving trends of mobile and cloud computing and Web 2.0 applications. It is a solid and versatile model that recognizes the need for educators experienced in using technology and technology coordinators to work collaboratively in selecting, evaluating and using software for educational purposes. With some minor changes to its design, it could also serve as a guide that motivates inexperienced or new educators to introduce more technology and software into their learning environments as they gain experience with software evaluations.


References


Bilton, N. (2013). The End of Google Reader Sends Internet Into an Uproar. The New York Times. Retrieved from: http://bits.blogs.nytimes.com/2013/03/14/the-end-of-google-reader-sends-internet-into-an-uproar/


Poissant, A., Berthiaume, B., Hogg, K., & Clarke, D. (2013). Team ABKD Group 5010 CBU Winter 2013: Our Software Evaluation Model. Retrieved from: http://alexthebear.com/abkd/


Stamelos, I., Refanidis, I., Katsaros, P., Tsoukias, A., Vlahavas, I., & Pombortsis, A. (2000). An adaptable framework for educational software evaluation. Retrieved from: delab.csd.auth.gr/~katsaros/EdSoftwareEvaluation.ps


Tokmak, H. S., Incikabi, L., & Yelken, T. Y. (2012). Differences in the educational software evaluation process for experts and novice students. Australasian Journal of Educational Technology, 28(8), 1283-1297. Retrieved from: http://www.ascilite.org.au/ajet/ajet28/sancar-tokmak.pdf


Ullrich, C., Borau, K., Luo, H., Tan, X., Shen, L., & Shen, R. (2008). Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes. WWW ’08: Proceedings of the 17th International Conference on World Wide Web, 705-714. Retrieved from: http://wwwconference.org/www2008/papers/pdf/p705-ullrichA.pdf

Tuesday, March 12, 2013

Evaluating Web 2.0

Photo from morguefile.com
Reviews, feedback and observations are all ways in which most of us determine whether or not we might invest time and/or money in a particular product or experience. When faced with unfamiliar territory, we tend to flock to anyone who has already explored it so we can "pick their brain." This has also been the process for many of us when determining what software we might use in our educational settings. Additionally, formal educational reviews of software are useful, particularly when a large amount of money, from budgets that are already too tight, has to be shelled out for licensing, support and future upgrades.

However, with the constant evolution of the Internet, along with wireless connectivity and mobile technology, the offerings of educational software, or software that can be used in an educational setting, have changed the landscape. It may no longer be as important for a software review to cite the required operating system as it is to note which Internet browser works best with applications that fall under the category of "Web 2.0."

The development of Web 2.0 applications has changed things for both the educator and the learner. As indicated in the article "Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes," Web 2.0 applications "take full advantage of the network nature of the Web: they encourage participation, are inherently social and open." Not surprisingly, as pointed out in this article, Web 2.0 applications fall in line with "modern educational theories such as constructivism and connectionism," making them ideal for use in many educational settings. Additionally, they are ready-to-use platforms, and the "burden of designing an easy to use interface is taken from the teacher."

The popularity and continued development of Web 2.0 applications have left educators with a tremendous choice of online software that can be used in countless creative ways for educational purposes. And most of it is free or extremely affordable.

Yet this growing trend does not make educational reviews passé. How Web 2.0 applications are assessed may differ, however, since many of them were not developed specifically for educational use. So while an educator would still like to know how an application works and what its interface is like, the more important feature of educational reviews could become how the application has been "applied" in specific educational settings. Such is the case on the Free Technology for Teachers site written by Richard Byrne. Not only does Byrne offer many different choices of online applications, but he also usually discusses how the software can be applied in the classroom.

The development of evaluation repositories would appear to coincide with the nature of Web 2.0 applications (online participation, etc.). Additionally, as educators continue to implement Web 2.0 in their learning environments, it is only logical that some educators will also choose to develop their own Web 2.0 applications for particular learning outcomes. In fact, online communities very similar in spirit to an evaluation repository already exist for information sharing and the collaborative development of Web 2.0 applications, such as web2fordev.net. Moreover, many existing sites focus on the use of Web 2.0 applications/tools for educational purposes, such as www.educatorstechnology.com.

When we bundle the power and popularity of Web 2.0 with the continuing focus on mobile computing and the growth of BYOD (Bring Your Own Device) in educational settings, a very strong argument can be made that the approach to evaluating software for educational purposes is going through many changes, and will continue to. Indeed, as educators and students continue to embrace the power of Web 2.0, how we approach software evaluations is only one of many concerns. For example, as stated in "Critical Issues in Evaluating the Effectiveness of Technology" (critical issue 7), there is a continuing need for "policies that govern technology uses" to "keep up with classroom practices" so "innovative and effective practices" can be encouraged and continue to grow.

The use of evaluation repositories, where experts, researchers, educators, learners and any other stakeholders can share, learn, discuss, argue and evaluate the effectiveness of Web 2.0 applications in our learning environments, may be one of the more important factors in helping policymakers stay abreast of the continuing changes in technology and how it is applied in educational settings. Our older practices of trying to standardize what software should be used will be out of date before most educators even receive the updates on what has been deemed acceptable software for their classrooms. The idea of following a model similar to how we evaluate and choose books for educational settings has some merit, but it cannot be a process that limits the educator's ability to decide what application would work best for his/her classroom, or to develop or customize an existing Web 2.0 platform.


Saturday, February 23, 2013

Are Weebly Reviews Wobbly?

Website Reviews: Weebly
by Simon Wright
http://www.helium.com/items/1191856-is-the-free-personal-website-creation-service-weebly-any-good

Screen shot of Weebly for Education
This review was written in 2008 and, therefore, does not target the newer educational version of Weebly. However, aside from the class management features for educators in Weebly for Education, the platform itself is pretty much the same as the original Weebly. This online review did shed more light on the help features than our review. It was clear that the reviewer had invested more time in Weebly than we did, as Wright created his own site on Weebly rather than doing a trial run. His experience with the platform therefore allowed him to comment a bit more on the limitations of the help features, although, unlike our review, he does not give any specific details about the problems he encountered. Like our review, Wright comments on the fact that Weebly is essentially a free service and that, for a fee, you can unlock a few more features that some users might desire.

Although not a review directed at educators, Wright's review does spend some time on the usability of the site, citing how easy the program is to use and explaining some of the features. Unlike our own experience, he points out some issues with work being lost when the program crashes. Lost work was never an issue when we did our trial run with the platform, but perhaps this problem has been ironed out in the last few years. Wright mentions that one attraction of Weebly is the range of templates the platform offers. This was also observed in our own experience, and many of our students made positive comments about the templates in their feedback. While the author makes no direct comments on how effective Weebly would be in a learning environment, it is interesting to note that Wright discusses, in his closing remarks, how a website allows a writer to showcase his work. This was also something many of our students discussed when considering applications for Weebly in their learning environment, and it was part of our criteria for the evaluation.

Weebly Review
By Erez Zukerman
http://www.pcadvisor.co.uk/reviews/software/3322448/weebly-review/

Considered an "Expert Review"from 2011, this evaluation devotes a lot of time describing the design of the platform and the author explains some of the positive attributes to this design and some of the limitations. The majority of the review focuses on the usability of Weebly. Zukerman also mentions that if the user has some knowledge of HTML and CSS, a custom theme can be be created and a built-in code editor is an available feature. This is an interesting observation and was not explored in our review. Like our review, the review also discusses the fact that Weebly is essentially a free service, but additional features are available for  a price. Unlike our review, Zukerman discusses the ability to work collaboratively on your website with other users. While this is not a review pinpointing the Weebly educational version, this collaborative feature would offer many possibilities in the classroom setting or if students were working with other students from another school, etc. Like our review, the author does suggest other software platforms for web creation, including WordPress. He does stress, however, that Weebly allows the user to produce a product that looks more unique than what is offered with WordPress. Aside from the "Expert Review," readers can also check out "User Reviews" of the software as well as the final verdict on the software, related products and technical specifications. It is interesting to note that none of the related products listed in this review correspond to the alternative software we suggested in our review. Under technical specifications, the software only lists Window operating systems for Weebly. This is an error and was also an issue that came up when we were trying to choose what to select on our own evaluation model. We choose Windows, but, like this review, the selection is misleading because, as a web-based application, it should work on any operating system with a supported browser. We found that Weebly worked poorly in Internet Explorer compared to its performance in Safari on a Macintosh system with OS 10.8.2 or Google Chrome on Windows 7. This issue was not discussed in this review, nor Wright's review.

Weebly Review - The Website Builder that makes Web Design Fun
By Mike Johnston
http://www.cmscritic.com/weebly-review/


Written in 2012, this review focuses completely on usability and takes a very engaging approach, including screenshots from the program as the author discusses the different features on offer. As in our review, Johnston stresses the ease of use of Weebly and how it is a great choice for users who are unfamiliar with HTML but still want to create a professional-looking site. Yet while the author effectively displays the features of Weebly through screenshots, he fails to mention that some of the elements displayed are only available under a monthly payment program. Additionally, Johnston never explores the help features of Weebly, and while he comments on how impressed he was with Weebly's speed, he never discusses what operating system or browser he used or the technical specifications of his computer. As in Zukerman's review, Johnston does discuss the ability to edit the HTML/CSS directly. It is clear from this review that the author loves the features of Weebly and how the platform operates, and he makes that very clear in his concluding remarks. While we were also impressed with Weebly's usability, our overall conclusion was not as positive as Johnston's, and we offered alternative software suggestions, which are not included in this review.

And, finally...

For some reason it would not seem right if I did not include some educators who have discussed Weebly and how they have used it in their classrooms. So while the links below are not formal evaluations of Weebly, they do offer practical suggestions on how Weebly can be used, suggestions and observations that mainly originate from actual experience. I found it difficult to find formal evaluations that looked at the educational version of Weebly, so I think it is only fitting to include a few of the many sites that explore how Weebly has been used in the classroom. There is no doubt that the links below include comments that are very subjective and biased in nature. Yet I do not think any of the authors, many of whom are also educators, make any claims to objectivity. One could argue that a high level of objectivity would be expected from the "expert" reviews above; I would argue that this is not always the case, particularly in the last review by Johnston. While both Wright and Zukerman reach overall positive conclusions about Weebly, they do make an effort to point out some shortcomings, as did our own evaluation. Johnston's review, however, is blatantly positive, and while I think he did one of the best jobs of not only discussing but illustrating the ease of use of the platform, he failed to point out some of the limitations and flaws, ones he himself must have experienced when using the software.
Illustration explaining the science behind wobbling Weebles (Wikipedia).

I think, overall, educators would find the three Weebly reviews discussed here helpful. However, I did find it interesting that the primary focus of the reviews was usability. This is, of course, a very important criterion, and it gives teachers an idea of how "user friendly" the software is, but other criteria have to be considered when selecting software for a learning environment. To be fair, none of the reviews cover the educational version of Weebly, so while they do offer information that would be helpful to an educator considering this software, ultimately the sites listed below provide a more practical component, including suggestions on how Weebly can be used with students. It is not unlike one teacher asking another how they used a resource in their teaching. Ultimately, software is a resource, and the best source of guidance is educators who have already used it.

Below are links to discussions about Weebly (many by educators). The last link, added more for fun than for any other reason, is an article by Ryan Dube, who pokes fun at some reviewers (including Erez Zukerman) and then explores web designs created with the Weebly platform that actually do not "suck."

Tool of the Month - May 2012
http://www.theconsultants-e.com/resources/ToolsResources/toolmay2012.aspx

Teacher First Review - Weebly
http://www.teachersfirst.com/single.cfm?id=12342

Tales From a 21st Century Teacher
http://wrteacher.wordpress.com/tag/web-design/

National History Day and Weebly
http://nhd.weebly.com

Journey of a Science Teacher
http://journeyofascienceteacher.blogspot.ca/2011/07/weebly-for-education.html

Free Technology for Teachers
http://www.freetech4teachers.com/2009/10/weebly-for-education.html#.USmQL6XrbzI

9 Weebly Websites That Actually Don't Suck
http://www.makeuseof.com/tag/top-10-weebly-websites-suck/

Monday, January 21, 2013

If you could read my mind...it would be MindView!

Screenshot from MindView's PDF Manual
The software that I would like to consider for evaluation is MatchWare Education Software's MindView. Essentially, the software allows students to arrange their ideas using a mind mapping technique so they can visualize, brainstorm and organize before they begin writing. Students can export their visual representation as an outline to MS Word or Rich Text Format (RTF). The software can also be used for storyboarding, presentations and timelines. As an educator who teaches English, Writing, Journalism and Media Studies, I can see various potential applications for this software. The software can be downloaded for a 20-day trial at the link below. It is available for Windows PCs (MindView 5) and Apple Macintosh computers (MindView 4). You do have to register to download the software, but it does not ask for any financial information. For the purpose of this assignment, I will consider the use of this software within Language Arts activities/assignments and link it to the Atlantic Canada Language Arts Curriculum Outcomes:


  • Students will be expected to interpret, select, and combine information, using a variety of strategies, resources and technologies. 
  • Students will be expected to respond personally to a range of texts. 
  • Students will be expected to respond critically to a range of texts, applying their understanding of language, form and genre.
  • Students will be expected to use writing and other ways of representing to explore, clarify and reflect on their thoughts, feelings, experiences and learning; and to use their imagination. 
  • Students will be expected to create texts collaboratively and independently, using a variety of forms for a range of audiences and purposes. 
  • Students will be expected to use a range of strategies to develop effective writing and other ways of representing, and to enhance clarity, precision and effectiveness. 


Link to download a free trial of MindView (PC or Mac): http://www.matchware.com/en/downloads/default.htm

Sunday, January 13, 2013

Stumbling over software models

Screenshot of AMAC's software evaluation form.
A very comprehensive educational software evaluation form was designed by AMAC, a unit of the Enterprise Innovation Institute at Georgia Tech. The form includes an area to record basic information about the software, including cost, as well as the stated target population for the software.

AMAC's purpose is to improve the lives of individuals with disabilities "by providing technology-based products, services, and research at competitive or reduced costs." I assume the software evaluation model was developed with this goal in mind. However, the model could be adapted for other educational software evaluations. 


The model also covers what feedback the software provides to the student regarding performance, as well as how the student's progress is monitored. Another important feature is the inclusion of universal design considerations (Demands on the User and Adaptability).


The Technical Quality section also includes a number of important criteria, such as help screens, student motivation, whether the program operates without crashing, and whether the program can be operated through multiple means.


I think one of the problems with this model is its length. The entire form is eight pages (including Appendices A and B). It would probably be possible to rearrange the form into another layout that reduces the number of pages.


If the model were applied to other educational software, I think it would be beneficial to also provide an opportunity for user feedback. My sense from this model is that the feedback is completed by someone observing the student as he/she uses the software, or by an educator operating the software on a trial basis.


During my search, I also stumbled across two other sources of information that I thought were beneficial. One is the article "Evaluation of Educational Software: Theory into Practice." The article takes into consideration the different purposes of software and also discusses different approaches to teaching. It categorizes software into four segments and then suggests the criteria required to evaluate each. It cleared up a few things for me in trying to come to terms with software that may be more of a "tool for learning" versus software as a "virtual class." I also liked the article's conclusion, which includes this statement: "software is powerful not because it is technologically superior but because it enables educators of different educational perspectives, to bring creative innovations into teaching and learning."


My second stumble was over Prince Edward Island's site for software evaluation. The site's model includes three steps: software submission, software quality assessment, and technical and quality assessment. There are some very helpful PDF documents on this site for anyone looking at software evaluation models. The department also notes that this process is mainly for educators or schools who wish to have software approved for school network use.


Links to materials or sites mentioned in this article. 


http://www.amacusg.org/
http://www.amac.gatech.edu/wiki/images/e/e8/Softevalform_Fall07.doc
http://eprints.utas.edu.au/1328/1/11-Le-P.pdf
http://www.edu.pe.ca/softeval/

Thursday, January 10, 2013

Whose opinion is best?

Photo from morguefile.com
After reading Deborah Lynn Stirling's article, "Evaluating Instructional Design," I am not sure if I am more or less confused about the best approach to software evaluation. I like the idea of an experimental study, but as noted in the article, how the student learns or how the software is used is not part of this approach. The effectiveness of the software is measured through student achievement results. Yet high student achievement does not necessarily mean the software is effective. As indicated in Wanda Y. Ginn's article "Jean Piaget - Intellectual Development," drill and practice computer software "does not fit in with an active discovery environment" and "does not encourage creativity or discovery." A piece of instructional software could simply be drill and practice where student achievement is high, but if the evaluation approach does not include examining how the student learns, I think it has a major shortcoming. In contrast, Stirling's discussion of the User Surveys approach indicates that teachers do "in fact judge software based on evidence other than student achievement." While I am not always sure of the practicality of this approach, I do agree with the statement that teachers "can benefit from field testing software within their own classroom." I know from my own experience that if I can find the time to have a few students sit down and try software before I introduce it to the whole class, I can avoid minor issues, simply because I would never have anticipated such problems would surface. The software is ultimately going to be in the hands of the teacher and the students, so I think this approach makes the most sense, despite its challenges in terms of practical application.

The overview provided by Stirling on evaluating instructional software could also be appropriate for tool/application software like word processors or spreadsheets. I think any approach that involves both the educator and the student, like the User Surveys approach, creates an opportunity for students to become, as Ginn points out, "active participants instead of passive sponges" in their learning. From my own experience, I have introduced software to students only to have some suggest they could do the task more effectively with different software. So if the teacher is willing to field-test software (whether instructional or another type of application) with his/her students, that will certainly open the door for students to be more actively involved in their learning. At the same time, the field testing of tool/application software is getting harder and harder as students bring their own devices to school (BYOD). The traditional standardization of technology in many schools is slowly fading. Norris and Soloway, in "Tips for BYOD K12 Programs," argue that by the year 2015 "every student in America's K-12 public school system will have a mobile device to use for curricular purposes." As students remain connected to the Internet, with various Web 2.0 applications available to them, it only makes sense that they will have a more active role in choosing the application they want to use to get the job done. It can be argued that this move toward less standardization and more personalization could make field testing obsolete. I would argue, however, that field testing blends well with the Web 2.0 world, where students are given the opportunity in the classroom to present applications for field testing that the teacher might not have considered or even be aware of.

When I initially considered Stirling's conclusion on evaluating instructional software, I decided that the direct approach was more in keeping with her perspective. I took this position because the direct approach is where the teacher has control, and Stirling argues that "software evaluation should be conducted by the instructor." What is not clear to me, however, is whether Stirling is arguing for a completely new approach to software evaluation, or suggesting that the last approach she discusses, the User Survey (where the teacher field tests the software), is best suited because the teacher/instructor conducts the evaluation. If she is suggesting the User Survey approach with field testing in the classroom, this could be a constructivist perspective because, as Stirling concludes, "the evaluation method used should yield information about quality, effectiveness, and instructional use." This approach, in my opinion, would have to go beyond data collection involving only student achievement. The students involved in the field testing would have to be active participants in that data collection, even though the approach is being conducted by the teacher. Stirling's quote from V.E. Weckworth, however, would suggest a more direct approach, as the argument is made that the critical element in evaluation is "who...has the power, the influence, or the authority to decide." If Stirling is suggesting the teacher should conduct his/her evaluation with this philosophy in mind, then it would definitely be a direct approach. However, I would argue this view of power, influence and authority is greatly outdated in today's world, especially with tool/application software, where younger students have as much access to powerful applications as adults and where lucrative software startups are created by people hardly out of high school.

In following the Expert Opinion approach to software evaluation, I think there are a number of criteria that should be included. I struggled to come up with more than three, so I got some help from the site TechPudding.

  • One major concern in any educational community is budgeting, so the cost of licensing, implementation and potential upgrading should be considered. 
  • As an educator, I think it is important to consider how the software collects and tracks data, and whether it allows the teacher/school to analyze and share that data.
  • Is the software user-friendly, with a clean layout and user interface? (Does it follow the typical interface conventions of most existing software, so the user does not have to relearn how to navigate?)
  • The site TechPudding makes a great case for universal design and higher order thinking attributes. In terms of engagement and learning in the 21st century, does the software meet the needs of today's learners?