Technology Enhanced Learning

Carrie Heeter
Michigan State University
Department of Telecommunication


2.7.5 Virtual Zoo Exhibit

College of Computing

College of Architecture

Georgia Institute of Technology [3]

This virtual learning experience built on an earlier Zoo Atlanta VR exhibit in which the visitor became an adolescent male gorilla and tried to approach virtual adult gorillas to see how they responded. Middle school children did not learn as much as the designers had hoped, most likely because the experience was too unstructured; more learning occurred when a guide pointed out behaviors and made suggestions.

In the follow-up project, the immersive environment targeted college students rather than middle school children. Its goal was to teach the design principles used in constructing an animal habitat within a zoo setting, as part of a college course on environmental design.

Based on previous experience, the authors now believe success depends on producing a satisfying and believable experience for the user. In addition, they recommend tightly coupling experiential learning with abstract information embedded in the virtual environment, a combination they call an information-rich environment. Student assessment of the system was overwhelmingly positive.

For the study, 24 students were divided into three groups:

    • CONTROL = normal (n=5)

    • INFO GROUP = explore habitat then gather info within VE (n=8)

    • HABITAT GROUP = lectures plus VE but no embedded information (n=3)

None of the differences in learning-performance test scores across the three groups were statistically significant.

The project used some interesting design elements. Voiceholders, cubes placed in the 3D environment, played an audio clip when the user grabbed them. Static gorillas helped illustrate concepts and gave a sense of scale. (This makes sense for designing a gorilla exhibit, but could be quite humorous in other worlds as well!)
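The voiceholder mechanic can be illustrated with a minimal sketch. This is not the project's actual code; the names (`Voiceholder`, `AudioClip`, `on_grab`) are hypothetical stand-ins for whatever the real system used.

```python
class AudioClip:
    """Stand-in for an audio asset; a real system would stream a file."""
    def __init__(self, name):
        self.name = name
        self.play_count = 0

    def play(self):
        self.play_count += 1
        print(f"Playing clip: {self.name}")


class Voiceholder:
    """A grabbable cube in the 3D scene that plays its clip when grabbed."""
    def __init__(self, position, clip):
        self.position = position  # (x, y, z) location in the environment
        self.clip = clip

    def on_grab(self):
        # Invoked by the interaction system when the user's hand
        # closes on the cube.
        self.clip.play()


clip = AudioClip("gorilla-behavior")
holder = Voiceholder((2.0, 1.0, -3.5), clip)
holder.on_grab()  # grabbing the cube starts the narration
```

The key design point is the coupling: the abstract information (audio narration) is anchored to a tangible object at a meaningful spot in the environment, rather than delivered in a separate menu or overlay.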

They also devised a navigation interface designed to prevent users from getting lost or disoriented. In addition to a helmet, users held a tablet in their nondominant hand and a stylus in their dominant hand. A map of the environment was projected onto the virtual representation of the tablet. Users could tap the stylus on the location they wanted to reach; when they released the stylus button, the system flew them to the chosen location. Navigation was easy, but users tended to face only one direction and needed to be reminded that they could look around. For selection, a virtual ray of light proved nearly ideal for pointing at objects, though not for manipulating them.
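The map-based flying technique can be sketched in two steps: convert the 2D tap on the tablet map into a world-space target, then interpolate the viewpoint toward it so the user is flown rather than teleported. This is an assumed reconstruction, not the project's implementation; the function names, the fixed eye height, and the linear interpolation are all illustrative choices.

```python
def map_to_world(map_xy, map_size, world_bounds):
    """Convert a stylus tap on the 2D tablet map into a 3D world target.

    map_xy:       (mx, my) tap position in map pixels
    map_size:     (width, height) of the map in pixels
    world_bounds: ((x0, z0), (x1, z1)) ground-plane extent of the environment
    """
    (mx, my), (mw, mh) = map_xy, map_size
    (x0, z0), (x1, z1) = world_bounds
    return (x0 + mx / mw * (x1 - x0),  # world x
            1.7,                       # fixed eye height in meters (assumed)
            z0 + my / mh * (z1 - z0))  # world z


def fly_to(current, target, steps=10):
    """Yield intermediate viewpoints so the user is flown, not teleported."""
    for i in range(1, steps + 1):
        t = i / steps  # interpolation fraction, 0 < t <= 1
        yield tuple(c + (g - c) * t for c, g in zip(current, target))


# Tap the center of a 100x100-pixel map of a 20m x 20m habitat:
target = map_to_world((50, 50), (100, 100), ((0, 0), (20, 20)))
path = list(fly_to((0.0, 1.7, 0.0), target))  # path ends at the target
```

Flying rather than snapping to the destination helps users keep their bearings, which matches the stated goal of preventing disorientation.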


Copyright 1999 CommTech Lab @ Michigan State University.
All rights reserved.