Educational Research

In this paper we provide an in-depth evaluation of the article by McCoy and Sorensen (2003), titled "Policy perspectives on selected virtual universities in the United States". The following questions are answered in this work: What is the focus of this study? Is the methodology explained in a manner that allows the reader to understand the research process being applied? Are the findings communicated clearly? Are the conclusions of the study appropriate, that is, in line with its results? Do the authors provide recommendations for future research? To begin with, this study presents a policy analysis of the development process of public virtual universities. The authors explain that public virtual universities are driven by the technology used to design and build their systems, which serves as the technical basis, or platform, of those universities. This technology comprises the human-machine interface devices used to present multimodal information and sense the virtual world, as well as the hardware and software used to generate the virtual environment. It also includes the techniques and electromechanical systems used in tele-robotics, which can be transferred to the design of public virtual universities' systems, and the communication networks that can transform those systems into shared virtual worlds.
Many of the results presented in this article stem from the use of a particular methodology for the design, evaluation, and application of interaction techniques, with the goal of optimizing performance. To understand the context in which the guidelines presented in the article were developed, the parts of this methodology relating to design are briefly discussed. Principled, systematic design and evaluation frameworks give formalism and structure to research on interaction, rather than relying solely on experience and intuition. Formal frameworks provide not only a greater understanding of the advantages and disadvantages of current techniques, but also better opportunities to create robust, well-performing new techniques based on the knowledge gained through evaluation. Several important design and evaluation concepts are therefore elucidated in the following sections. As depicted by McCoy and Sorensen (2003), the first step toward formalizing the design of interaction techniques is to gain an intuitive understanding of the tasks involved in interaction with public virtual universities (essentially, virtual environments) and of the techniques currently available for these tasks.
This is accomplished through experience using interaction techniques (ITs) and through observation and evaluation of groups of users. Often in this phase, informal user studies or usability tests are performed in which users are asked what they think of a particular technique, or are observed trying to complete a given task with the technique under study. These initial evaluation experiences are drawn upon heavily in the process of creating taxonomies and categorizations of interaction techniques. It is helpful to gain as much experience of this type as possible so that informed decisions can be made in the next phase of formalization. The next step the authors describe in creating a formal framework for design and evaluation is to establish a taxonomy of interaction techniques for each interaction task (e.g., Wolf and Johnstone's (1999) taxonomy). Such taxonomies partition a task into separable subtasks, each of which represents a decision that must be made by the designer of a technique. Some of these subtasks relate directly to the task itself, whereas others may be important only as extensions of the metaphor on which a technique is based.
In this sense, a taxonomy is the product of a careful task analysis. Once a task has been broken down to a sufficiently fine-grained level, the taxonomy is completed by listing possible methods (technique components) for accomplishing each of the lowest-level subtasks. Ideally, the taxonomies established for universal tasks should be correct, complete, and general: any IT that can be conceived for the task should fit within the taxonomy. The subtasks will therefore necessarily be abstract. The taxonomy will also list several possible technique components for each subtask, but will not claim to list every conceivable component. For example, in an object-coloring task, a taxonomy might list touching the virtual object, giving a voice command, or choosing an item in a menu as options for the color-application subtask. This does not, however, preclude a technique that applies the color by some other means, such as pointing at the object. McCoy and Sorensen (2003) argue that one way to verify the generality of a taxonomy is through the process of categorization: defining existing ITs within the framework of the taxonomy. If existing techniques for a task fit well into the taxonomy, one can be more confident of its correctness and completeness.
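The taxonomy-and-categorization idea described above can be sketched as a simple data structure. The following is a minimal illustration only: the subtask names and technique components are hypothetical, loosely based on the object-coloring example, and are not taken from the article itself.

```python
# Hypothetical taxonomy for an object-coloring task: each subtask is
# mapped to a list of possible technique components.
coloring_taxonomy = {
    "select_object": ["touch virtual object", "ray-cast pointing", "menu pick"],
    "choose_color": ["palette menu", "voice command"],
    "apply_color": ["touch virtual object", "voice command", "menu item"],
}

def categorize(technique, taxonomy):
    """Check whether an existing technique fits the taxonomy: every
    subtask must be covered, and each chosen component must be among
    the listed options for that subtask."""
    for subtask, component in technique.items():
        if subtask not in taxonomy:
            return False  # technique needs a subtask the taxonomy lacks
        if component not in taxonomy[subtask]:
            return False  # unlisted component: taxonomy may be incomplete
    return set(technique) == set(taxonomy)  # all subtasks addressed

# A hypothetical existing technique, categorized within the framework:
touch_to_color = {
    "select_object": "touch virtual object",
    "choose_color": "palette menu",
    "apply_color": "touch virtual object",
}
print(categorize(touch_to_color, coloring_taxonomy))  # True
```

A technique that fails `categorize` signals either an incomplete taxonomy (a missing component or subtask) or a technique that genuinely falls outside the task analysis, which is exactly the correctness-and-completeness check the authors describe.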
Categorization also serves as an aid to the evaluation of techniques. Fitting techniques into a taxonomy makes their fundamental differences explicit, so the effect of design choices can be determined in a more fine-grained manner. Taxonomies and categorization are good ways to understand the low-level makeup of ITs and to formalize the differences between them, but once in place they can also be used in the design process. A taxonomy can be thought of not only as a classification but also as a design space; in other words, it informs or guides the design of new ITs for a task, rather than relying on a sudden burst of insight. Because a taxonomy breaks a task down into separable subtasks, a wide range of designs can be considered quite quickly, simply by trying different combinations of technique components for each of the subtasks. There is no guarantee that a given combination will make sense as a complete interaction technique, but the systematic nature of the taxonomy makes it easy both to generate designs and to reject inappropriate combinations.