Monday, March 25, 2013

Critique of the ABKD Group's Software Evaluation Model


The ABKD group’s educational software evaluation model addresses many of the rapid changes occurring within software development, particularly the abundant availability of open-source Web 2.0 applications. As indicated in the ABKD group’s description of their model, they argue for a “qualitative evaluation,” while addressing the needs of teachers “and keeping an open-door for new, innovative software.” The vast majority of new, innovative software is now online, particularly on Web 2.0 platforms, and much of what is available is not specifically designed for educational purposes. Yet educators are continually welcoming this software, and the ABKD group recognizes this evolution. They argue that this software may not be primarily designed for educational settings, but it “could be used in such a manner if the proper context and guidelines are set by the teacher.”

The explanation in the ABKD group’s introduction that their model “favours a socio-constructivist approach to learning” coincides with Ullrich, Tan, Borau, Shen, Luo and Shen (2008) who state that learning from a constructivist’s view takes place in a social context and “the innate properties of Web 2.0 services are beneficial for learning” (p. 707). In their discussion of Web 2.0, one example that Ullrich et al. (2008) provide is the learning of a foreign language which, coincidentally, is the focus of learning in the trial evaluations the ABKD group did in relation to Wordpress and Audacity. Indeed, Ullrich et al. (2008) argue that Web 2.0 “is characterized by social learning and active participation, as advocated by constructivism” (p. 709).

One of the biggest strengths of the ABKD group’s software evaluation approach is the opportunity for the evaluator(s) to deeply explore and provide feedback based on socio-constructivist goals and outcomes, as evident in section three of their model. Question 3.3 states: “Does the software encourage exploration, experimentation, and student creation of their own knowledge? Explain how with some examples of what students can do.” Through their own trial evaluations, the ABKD group illustrated the effectiveness of asking the evaluator to explain how the software “can help learning,” as opposed to eliciting simple “yes” or “no” answers. For example, in their evaluation of WordPress, the response to question 3.3 states that students write posts about their learning experiences and other daily experiences, which gives students the opportunity to get to know each other better and form friendships as they communicate in a foreign language.

Additionally, this approach was effective in section two, regarding usability. In question 2.4, on the evaluation of Audacity’s interface design and representation of the content, the evaluator makes specific comments about the application not being “visually stimulating” and notes that “young students may find it ‘boring’ to look at.” Further observations in relation to this question explain that the icons in the interface are “unique to audio processing” and that practice with the interface is necessary, even though the manual editing is simple to complete. The qualitative format of the evaluation model allows for observations from the evaluator that can serve as warning flags for educators whose students could become overwhelmed by unfamiliar and/or difficult interfaces. Ullrich et al. (2008) make reference to previous studies that have shown that “disorientation and cognitive overload are the principal obstacles of self-regulated learning in technology-enhanced learning” (p. 707).

Another example of the effectiveness of this software model’s design is apparent in section five on “Support and Continuous Development,” where the evaluator must complete a number of comprehensive open-ended questions concerning online documentation, opportunities to provide feedback to the developer, the status of the developer’s website, available updates, etc. Question 5.3 instructs the evaluator to “Look at the developer's website and comment on recent activities. Are developers addressing concerns and problems in the forums? Do current users seem happy with the software?” These questions are important not only in terms of what support and documentation are available, but also, as indicated by Stamelos, Refanidis, Katsaros, Tsoukias, Vlahavas and Pombortsis (2000), in “giving suggestions on the various teaching strategies instructors can adopt...informing how the program can be fitted into a larger framework of instruction, etc.” (p. 9). Such resources and user feedback on developers’ sites are becoming increasingly vital with the use of Web 2.0 applications in education, and they reflect the nature of Web 2.0 in “harnessing the power of the crowd” (Ullrich et al., 2008, p. 707).

This software evaluation model covers many issues that could arise when using Web 2.0 software and saving data to the “cloud.” The model addresses the issue of data portability in question 4.2 under “User Data and Security.” As Web 2.0 applications evolve and sometimes are even terminated, such as the recent announcement that Google will be shutting down Google Reader (Bilton, 2013), it is very important that an evaluator provides information on how data can be saved and exported. Furthermore, information must also be provided in regard to terms of service around ownership, privacy, etc. as addressed in 4.3, 4.4 and 4.5.

One of the strengths of the ABKD group’s software evaluation model could also be its biggest weakness: the process. While it addresses the reality in most educational settings - where educators usually work hand in hand with technology coordinators - the model may become unwieldy in its execution. The model is divided into three parts. The instructor completes the preliminary evaluation; the educator, in conjunction with the technology coordinator, completes the secondary evaluation. If the software is deemed worthy of further scrutiny, it is then tested with a pilot group of students, who complete student evaluations.

The preliminary evaluation is relatively concise, and the evaluator answers most of the form using a Likert scale. However, while the ABKD group indicates this form is to be completed by the instructor, the title of the form provided for the results of the WordPress evaluation indicates “ICT Coordinator,” and the instructions then state it is to be completed by “the instructor” to make “a secondary assessment.”

Aside from issues that could result in confusion over titles, terminology and when the evaluation is to take place, another concern with the process is the assumption that “the teacher is knowledgeable in current teaching trends and best practices, and seeks to employ a constructivist pedagogy as the dominant form of instruction and learning. It also assumes the teacher is knowledgeable in the content area for which the software is intended.” Even though the teacher may be knowledgeable in current trends and practices, they may still have limitations when it comes to software evaluation and to their experience and knowledge of technology and software. Tokmak, Incikabi and Yelken (2012) report that when students who were studying in education programs performed software evaluations, they did not “provide details about specific properties in their evaluation checklist or during their presentation” and they “evaluated the software according to general impressions” (p. 1289). They concluded that education students and new teachers “should be given the opportunity to be involved in software evaluation, selection and development” (p. 1293).

One can argue that the ABKD group’s evaluation process addresses some of these concerns, since they suggest the majority of the evaluation is to be completed by the instructor in conjunction with a technology coordinator. However, it may be possible to streamline the process by incorporating the preliminary evaluation - the “Basic Software Information” and “Features and Characteristics” - into the secondary evaluation that is to be jointly completed by the instructor and technology coordinator. New teachers, and teachers who subscribe to a constructivist view but who have limited experience with technology and software evaluations, may find the preliminary evaluation intimidating and/or confusing. Furthermore, the final assessment completed by the students may be best considered an optional evaluation, since not every school environment may allow for such an evaluation to take place. This was evident even in the ABKD group’s own software evaluations, as they did not have the opportunity for students to complete the third part due to holidays.

Overall, the software evaluation model proposed by the ABKD group is a comprehensive and qualitative model that addresses the current evolving trends of mobile and cloud computing and Web 2.0 applications. It is a solid and versatile model that addresses the need for educators experienced in using technology and technology coordinators to work collaboratively in selecting, evaluating, and using software for educational purposes. With some minor changes to its design, it could also serve as a guide that motivates inexperienced or new educators to introduce more technology and software in their learning environments as they gain experience with software evaluations.


References


Bilton, N. (2013). The End of Google Reader Sends Internet Into an Uproar. The New York Times. Retrieved from: http://bits.blogs.nytimes.com/2013/03/14/the-end-of-google-reader-sends-internet-into-an-uproar/


Poissant, A., Berthiaume, B., Hogg, K., & Clarke, D. (2013). Team ABKD Group 5010 CBU Winter 2013: Our Software Evaluation Model. Retrieved from: http://alexthebear.com/abkd/


Stamelos, I., Refanidis, I., Katsaros, P., Tsoukias, A., Vlahavas, I., & Pombortsis, A. (2000). An adaptable framework for educational software evaluation. Retrieved from: delab.csd.auth.gr/~katsaros/EdSoftwareEvaluation.ps


Tokmak, H. S., Incikabi, L., & Yelken, T. Y. (2012). Differences in the educational software evaluation process for experts and novice students. Australasian Journal of Educational Technology, 28(8), 1283-1297. Retrieved from: http://www.ascilite.org.au/ajet/ajet28/sancar-tokmak.pdf


Ullrich, C., Tan, X., Borau, K., Shen, L., Luo, H., & Shen, R. (2008). Why Web 2.0 is Good for Learning and Research: Principles and Prototypes. WWW ’08: Proceedings of the 17th International Conference on World Wide Web, 705-714. Retrieved from: http://wwwconference.org/www2008/papers/pdf/p705-ullrichA.pdf

Tuesday, March 12, 2013

Evaluating Web 2.0

Reviews, feedback and observations are all ways in which most of us determine whether or not we might invest time and/or money in a particular product or experience. When most of us are faced with unfamiliar territory, we tend to flock to anyone who might have experienced this territory to "pick their brain." This has also been the process for a lot of us when determining what software we might use in our educational settings. Additionally, formal educational reviews of software are also useful, particularly when a large amount of money in budgets - that are already too tight - has to be shelled out for licensing, support and future upgrades.

However, with the constant evolution of the Internet along with wireless connectivity and mobile technology, the offerings of educational software, or software that can be used in an educational setting, have changed the landscape. It may no longer be as important for a software review to cite the required operating system as it is to note which Internet browser works best with applications that fall under the category of "Web 2.0."

The development of Web 2.0 applications has changed the landscape for the educator and the learner. As indicated in the article "Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes," Web 2.0 applications "take full advantage of the network nature of the Web: they encourage participation, are inherently social and open." Not surprisingly, as pointed out in this article, Web 2.0 applications fall in line with "modern educational theories such as constructivism and connectivism," making them ideal for use in many educational settings. Additionally, they are ready-to-use platforms, and the "burden of designing an easy to use interface is taken from the teacher."

The popularity and continued development of Web 2.0 applications has left educators with a tremendous choice of online software that can be used in an unlimited number of creative ways for educational purposes. And most of it is free or extremely affordable.

Yet this growing trend does not make educational reviews passé. However, how Web 2.0 applications are assessed may differ, since many of them were not developed specifically for educational use. So while an educator would like to gain information on how the application works and what its interface is like, the more important feature of educational reviews could become how the application has been "applied" to specific educational settings. Such is the case on the Free Technology for Teachers site written by Richard Byrne. Not only does Byrne offer many different choices of online applications, but he also usually discusses how the software can be applied in the classroom.

The development of evaluation repositories would appear to coincide with the nature of Web 2.0 applications (online participation, etc.). Additionally, as educators continue to implement Web 2.0 in their learning environments, it is only logical that some educators will also choose to develop their own Web 2.0 applications for particular learning outcomes. In fact, online communities very similar to the idea of an evaluation repository already exist for information sharing and collaborative Web 2.0 development, such as web2fordev.net. Moreover, there are many existing sites that focus on the use of Web 2.0 applications/tools for educational purposes, such as www.educatorstechnology.com.

When we bundle the power and popularity of Web 2.0 with the continuing focus on mobile computing and the growth of BYOD (Bring Your Own Device) in educational settings, a very strong argument can be made that the approach to and focus of evaluating software for educational purposes is going through many changes, and will continue to do so. Indeed, as educators and students continue to embrace the power of Web 2.0, how we approach software evaluations is only one of many concerns. For example, as stated in "Critical Issues in Evaluating the Effectiveness of Technology" (critical issue 7), there is a continuing need for "policies that govern technology uses" to "keep up with classroom practices" so "innovative and effective practices" can be encouraged and continue to grow.

The use of evaluation repositories, where experts, researchers, educators, learners and any other stakeholders can share, learn, discuss, argue and evaluate the effectiveness of Web 2.0 applications in our learning environments, may be one of the more important factors in helping policymakers stay abreast of the continuing changes in technology and how it is applied in educational settings. Our older practices of trying to standardize what software should be used will be out of date before most educators even receive the updates on what has been deemed acceptable software for their classrooms. The idea of following a model similar to how we evaluate and choose books for educational settings has some merit, but it cannot be a process that binds the educator's ability to decide what application would work best for his/her classroom, or to develop or customize an existing Web 2.0 platform.