Unlike mass media, where people are merely "viewers" who watch or listen to content, virtual reality experiences place you inside the content. Our research suggests that people want to experience a strong sense of self in virtual worlds. They want to be able to see their real hands, perhaps their real bodies.
Prospective VR entertainment players would like others to be able to see their real face in the virtual world (57% of college freshmen surveyed reacted positively, with an overall average of 4.8 out of 7). But they are even more interested in seeing other people's real faces (78%, with an average of 5.7). And they want to see the "real" faces of computer-generated agents or characters (82%, with an average of 5.9).
They are divided 50:50 on whether they would prefer to be represented by their real face or a fantasy face. (Women are more likely to prefer a fantasy face; men are more likely to prefer their real face; the difference is statistically significant at p<.006.) Asked whether they would prefer to wear their regular clothes or a special costume, 48% strongly preferred a costume, while 25% strongly preferred regular clothes. Gender differences were not statistically significant.
For whichever of three 3-D second person VR experiences a CyberArts participant tried (Undersea Adventure, Japanese Garden, and Tokyo Godzilla), they were asked to rate their enjoyment on a scale from 0 to 10, with 10 being very enjoyable. Those who could see only their shadow silhouette rated the experience 5.8 on average, compared to an average rating of 8.0 for those who saw their real body. Otherwise identical experiences were considerably more enjoyable when you got to see your real self in the world.
VR experiences which use a power glove or data glove frequently show users a computer-generated hand to represent their real hand movements. At SIGGRAPH 1992, participants in our whole-body, 3-D mirror world installation were asked whether they would prefer to see a computer-generated hand or their whole, real body. Eighty-seven percent would prefer to see their real body. Seeing your real self makes VR seem more real and more enjoyable.
The Comm Tech Lab "Hands On Hawaii" interface was an extension of that premise: if a real body was preferable to a computer-generated hand, real hands would probably be preferable to computer hands. Seeing your real self in second person VR is familiar, inasmuch as people see themselves in a mirror every day. But it is far more natural to look down and see the backs of your hands than to watch yourself in a mirror while you do things like explore a virtual world.
Our experience features an interface that lets users sit down at a custom-designed, whale-shaped kiosk, slip their hands under a curtain, and watch their hands appear in a virtual Hawaii on a screen in front of them. Both their life-sized hands and the virtual world are photorealistic. Users can "touch" video-graphic objects to explore the islands and learn about their ecosystems. The dividing line between the real and virtual worlds is marked by the curtain you slip your hands under to enter the virtual world. We hoped the interface would create a compelling and involving sense of presence in a virtual world, and we conducted research on user reactions.
The Hands On interface concept is well suited for public installations-- there are no moving parts or delicate equipment that users touch directly, no supervisor or guide is needed to manage use, and the hygiene issues associated with goggles or gloves are not a concern. Half of the 284 technical and artistically oriented SIGGRAPH attendees who tried the interface and completed a questionnaire felt the Hands On interface could definitely be an effective museum installation, and 42% felt it could possibly be effective. Only 5% said definitely not.
Consensus on the part of these sophisticated users is that a Real Hands, Virtual Worlds interface has possibilities. Ninety-two percent felt it had possibilities for their own learning; 96% felt it had possibilities for children's learning; 85% might be interested in trying their real hands inside other interfaces such as email or graphic design. One fourth definitely wanted to use this type of interface for their own learning; 38% definitely wanted to try it with other applications; and 59% definitely thought it would be good for children's learning.
The interface was a first prototype, not a perfected application. The average ease of use rating was 5.1 out of 7, where 1 was VERY EASY and 7 was VERY DIFFICULT. All of the users were first-time users trying the system for a short time with minimal guidance or instruction, and they were not necessarily interested in the content.
Hands On Hawaii is a "guided discovery learning" infotainment experience based on actual for-college-credit, three-week learning experiences in the Hawaiian Islands. Users can explore the extinct and active volcanoes on Hawaii and Maui. They can explore the coastline or visit sites which show human impacts on the island ecosystem at different points in history. They can control a 70-million-year animation of the birth of the islands. Grabbing the guide of their choice, users can parachute down to the surface from the aerial interface to explore.
Twenty-five percent of the content choices made were for the Beaches category, accounting for 29% of total user time. The remaining choices were quite evenly divided among the other four categories: volcanoes, humans, trails and birth of an island. Sixty-one percent of the time, users chose Trista, the female student guide, rather than Larry, the professor. But those who did choose Larry spent more time with the content and enjoyed most aspects of the experience more than those who consistently chose Trista.
There are several components of the interface which use real hands in different ways. The most intuitive interface element is simply touching a graphical object to interact. In Hands On Hawaii, touching menu choices selects them; touching the flying whale takes you back to the main menu; touching the helicopter takes you up to the overhead view of the islands.
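The touch-to-select behavior can be sketched in a few lines. This is a hypothetical reconstruction in Python, not the original Amiga implementation; the hot-spot names and coordinates are invented for illustration, and the hand is assumed to arrive from the edge detector as a list of silhouette pixel coordinates.

```python
# Hypothetical sketch: hot-spot activation by testing whether any detected
# hand pixel falls inside a rectangular screen region. Names and coordinates
# are illustrative, not from the actual system.

from dataclasses import dataclass

@dataclass
class HotSpot:
    name: str
    x: int   # left edge of the region, in screen pixels
    y: int   # top edge
    w: int   # width
    h: int   # height

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def touched_spots(hand_pixels, spots):
    """Return the names of all hot spots overlapped by any hand pixel."""
    hit = []
    for spot in spots:
        if any(spot.contains(px, py) for px, py in hand_pixels):
            hit.append(spot.name)
    return hit

# Example: the flying whale (back to menu) and helicopter (aerial view).
spots = [HotSpot("whale", 0, 0, 60, 40), HotSpot("helicopter", 260, 0, 60, 40)]
hand = [(10, 12), (11, 12), (150, 100)]   # silhouette pixels from edge detection
print(touched_spots(hand, spots))          # -> ['whale']
```

In a real frame loop, this test would run once per video frame against the current silhouette, with the triggered hot spot driving the menu transition.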
The Volcano Finger Drum was similar to Vivid Effects' full body interface drums, where touching a drum pad triggered a drum sound and visual effect. In our case, we wanted to try the concept using the hands interface. Three drum heads each played a different "volcano" sound -- one was a lava eruption; one was steam sizzling where hot lava touched the ocean; one was the sound of pebble-sized cinders hitting the ground during a cinder cone eruption. The drum could be played with three fingers, and parallel processing on the Amiga allowed for simultaneous triggering.
Within the beaches section, we built in a segment where a beach ball appears on a white sand beach. The ball would react to your touch, letting you launch it, catch it, and bounce it around on the sand. It was a totally intuitive way to use your real hands. The edge detection was quite imperfect, so it did not work as well as desired, but it was still effective and would work very well under a more robust system.
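The ball's reaction to a touch can be sketched as a tiny per-frame physics step. This is a speculative Python illustration, not the original code; the gravity, impulse, and bounce-damping values are invented, and `step_ball` assumes the edge detector reports at most one touch point per frame.

```python
# Hypothetical sketch: a beach ball that gets an impulse when a hand touch
# overlaps it, then coasts under gravity and bounces off the sand.
# All constants are illustrative, not from the actual system.

def step_ball(pos, vel, touch=None, gravity=0.5, ground=200.0):
    """Advance the ball one frame; a touch point knocks the ball away
    from the touch and launches it upward."""
    x, y = pos
    vx, vy = vel
    if touch is not None:
        tx, _ty = touch
        vx += 2.0 if x >= tx else -2.0   # push away from the touch point
        vy -= 6.0                         # launch upward
    vy += gravity
    x, y = x + vx, y + vy
    if y > ground:                        # bounce off the sand, losing energy
        y, vy = ground, -vy * 0.7
    return (x, y), (vx, vy)

# A hand hits the resting ball from the left: it flies up and to the right.
pos, vel = step_ball((100.0, 200.0), (0.0, 0.0), touch=(95.0, 205.0))
```

Each frame, the touch point would come from the same silhouette hit test used for menu choices, which is why imperfect edge detection degraded the ball's behavior.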
Content from two different hikes was included: one linear, hiking into Haleakala on Maui, and one nonlinear, demonstrating vegetation at different elevation levels climbing up Mauna Loa on the Big Island. For the Haleakala hike, we used camera motion on still frames to give a feel for motion along the path the students were following. Larry or Trista made comments about interesting facts observed along the way, with sound effects of hiking on a path. Graphical road signs appeared at the end of each camera motion; when touched, they moved the observer in the direction the arrow was pointing, to the next sequence. The flying whale appeared periodically to allow escape from the linear sequence of ten segments.
For Mauna Loa, we instead offered users a choice of which of four elevations they visited, and in what order. The interface was based on dragging a miniature animated hiker onto one of four framed rectangular closeup shots of the surface, located along a representation of the mountain. The interface transported you to the surface at that elevation, where camera motion on the slide accompanied a brief narration and hiking sounds. Touching the hiker returned you to the elevation menu.
User reactions at SIGGRAPH to the Mauna Loa choices were considerably more positive than to the linear Haleakala hike -- Haleakala was rated an average of 4.2 out of 7 by the 52 users who tried it, compared to a rating of 4.9 by the 36 who tried Mauna Loa. Users interested in the Hawaiian content rated Haleakala a full point higher on average (4.7 vs. 3.7) than those who did not care about the content. Thus, preliminary evidence suggests that users prefer the interactive to the linear interface, in a situation where both led to the same format and type of content (hiking with camera motion and narration). We plan to study more serious users (e.g., people actually interested in the content) to help rule out the possibility that Haleakala was simply too long for a conference attendee's desire to try a multifaceted prototype.
Within the segment on petroglyphs carved by early Hawaiians into volcanic rock, we ended with an activity where the user could stick their finger into a bucket of white paint and draw their own petroglyph onto a rock background.
We frequently resorted to an interface method where users dragged a small graphical pointing hand onto a visual representation of a choice or action -- for example, to choose a guide, they dragged a hand onto the image of either Larry or Trista; to choose an island locale to visit, they dragged a hand onto the parachute of their choice over one of the islands. This was necessary to avoid accidental triggering of "hot spots" on the screen, because we have no way of knowing where a user's hands are at any given moment. Making activation a two-step process added unfortunate complexity to the user's task, but helped to ensure that triggered actions were intentional.

Touching a graphical hand caused the graphic to stick to your real hand, moving with it around the screen. The edge detection software we used has two options -- left hold or right hold -- meaning the graphic sticks to either the leftmost or the rightmost edge of the user's hand. Unfortunately, this meant that users had to hold their hand in a pointing position to have good control over where they dragged the graphic. If some part of the hand other than the pointing finger stuck out further than that finger, the graphic object would slide down the side of the hand to the bottom of the screen. Also, fast jerky moves would release the hold, so one had to move moderately slowly to successfully drag the graphic onto a guide or parachute.
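The "right hold" drag behavior can be sketched as follows. This is a hypothetical Python reconstruction, not the actual edge detection software; it assumes the detector delivers the hand as a per-frame list of silhouette pixels, and it models the release-on-fast-move failure described above.

```python
# Hypothetical sketch of "right hold" dragging: each frame, the dragged
# graphic sticks to the rightmost pixel of the hand silhouette, and a
# jump larger than max_jump (a made-up threshold) releases the hold.

def track_drag(frames, max_jump=40):
    """frames: per-frame lists of (x, y) hand-silhouette pixels.
    Returns the graphic's path, ending early if the hold is released."""
    path = []
    prev = None
    for pixels in frames:
        if not pixels:            # hand left the camera's view
            break
        anchor = max(pixels)      # rightmost edge pixel (ties broken by y)
        if prev is not None:
            dx = anchor[0] - prev[0]
            dy = anchor[1] - prev[1]
            if dx * dx + dy * dy > max_jump * max_jump:
                break             # too fast a move: the graphic is dropped
        path.append(anchor)
        prev = anchor
    return path

# A slow, steady drag keeps the graphic attached across all three frames.
frames = [[(10, 50), (20, 50)], [(15, 52), (25, 52)], [(20, 55), (30, 55)]]
print(track_drag(frames))   # -> [(20, 50), (25, 52), (30, 55)]
```

Because the anchor is simply the rightmost silhouette pixel, any knuckle or thumb extending past the pointing finger steals the hold -- the failure mode users experienced when the graphic slid down the side of the hand.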
There is no question that this was the worst part of our real hands interface. On a scale from 1 (VERY EASY) to 7 (VERY DIFFICULT), the average difficulty rating across 284 novice users was 5.1; 19% rated it "7" and 5% rated it "1." We would certainly prefer a more exact and simple method. In partial defense of the prototype, when one took a little time to learn how to do it (as opposed to being an impatient, casual, one-time user for two minutes), accuracy and control were readily achievable.

Although I appreciated the opportunity to collect data at SIGGRAPH, I am left with a desire to also test the content-intensive Hands On Hawaii prototype with a small group of children who have some interest in learning the content and plan to spend 15-20 minutes using the system. The average interest in content about Hawaii on the part of SIGGRAPH study participants was 4.1, in the middle of a 7-point scale from VERY MUCH to NOT AT ALL interested. Not surprisingly, those who were interested in the content reported a significantly greater sense of presence in the virtual world than did those who were not. Those interested enjoyed specific experiences such as the Birth of an Island animations and the Haleakala hike more, enjoyed the interface more, were more likely to think the interface would be good for museums, and were more likely to report a desire to use the interface for their own learning experiences.
There is always a tension between the time it takes to learn how to use a new VR interface and the need to get to the point of the experience quickly. The problem is not unique to VR -- software designers have been wrestling with the issue since the beginning. But VR requires new skills to adapt to new interface methods. Particularly for entertainment experiences, but also for education, we are not accustomed to having to spend any time learning how to participate. Unless or until VR becomes commonplace and interaction methods standardize, that expectation will rarely be met.
Respondents rated the importance of different attributes that we should consider in next generation prototypes of this kind. Not surprisingly, better user control was the number one concern. Rich sound effects, motion video, more interactivity and music were next in importance. 3-D video, 3-D sound and tactile experiences (key components of reality!) were comparatively lower on average. We worked in the laboratory with 3-D video from ENTER Corporation, and found the impact in a real hands, virtual worlds interface to be stunning. But we have yet to attract funding to go shoot 3-D Hawaii, so the only experiences users had were 2-D. Motion simulators were ranked relatively unimportant, as was adding more guides with different expertise and involving other real humans in the interactive experience. Lowest rated of all was use of an immersive helmet.
I find it interesting that neither motion simulators nor immersive helmets were deemed particularly appropriate for a real hands interface. Also, including other live humans in the virtual world has consistently been ranked as extremely important, in surveys of both actual and hypothetical VR experiences. Yet, with real hands, that was low on the list. This seems to be perceived as a one person experience.
Females and males had significantly different reactions to the interface. Females reacted more positively to the real hands, virtual worlds experience along numerous variables, including desire to personally learn using this technology, preference for real hands to mice, preference for real hands to real body, and enjoyment of the different content components. Females also on average rated the importance of sound effects, motion video, music, and tactile experiences significantly higher than did males.
| Attribute | Overall Average | Female Average | Male Average |
|---|---|---|---|
| Better User Control | 9.4 | | |
| Sound Effects | 8.5 | 9.1 | 8.2 |
| Motion Video | 8.3 | 8.7 | 8.1 |
| More Interactivity | 8.2 | | |
| Music | 7.8 | 8.6 | 7.5 |
| 3-D Video | 7.1 | | |
| 3-D Sound | 6.6 | 7.7 | 6.1 |
| Tactile Experiences | 6.6 | | |
| Motion Simulator | 5.7 | | |
| Different Expertises | 5.6 | | |
| Other Real People | 5.4 | | |
| Immersive Helmet | 4.3 | | |
A full implementation would incorporate a larger content base, full motion 3-D video, and sequences captured specifically with this interface in mind. Hands On Hawaii is part of the Comm Tech Lab's ongoing research on spatial human interface design for hypermedia and virtual reality.