CV-16 - Virtual and Immersive Environments

A virtual environment (VE) is a 3D computer-based simulation of a real or imagined environment in which users can navigate and interact with virtual objects. VEs have found popular use in communicating geographic information for a variety of domain applications. This entry begins with a brief history of virtual and immersive environments and an introduction to a common framework used to describe characteristics of VEs. Four design considerations for VEs then are reviewed: cognitive, methodological, technological, and social. The cognitive dimension involves generating a strong sense of presence for users in a VE, enabling users to perceive and study represented data in both virtual and real environments. The methodological dimension covers methods in collecting, processing, and visualizing data for VEs. The technological dimension surveys different VE hardware devices (input, computing, and output devices) and software tools (desktop and web technologies). Finally, the social dimension captures existing use cases for VEs in geo-related fields, such as geography education, spatial decision support, and crisis management.

Author and Citation Info: 

Stachoň, Z., Kubíček, P., and Herman, L. (2020). Virtual and Immersive Environments. The Geographic Information Science & Technology Body of Knowledge (3rd Quarter 2020 Edition), John P. Wilson (ed.). DOI: 10.22224/gistbok/2020.3.9.

This entry was published on September 28, 2020.  

An earlier version can be found at:

DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., and Wentz, E. (2006). Virtual and immersive environments. The Geographic Information Science & Technology Body of Knowledge. Washington, DC: Association of American Geographers. (2nd Quarter 2016, first digital).

Topic Description: 
  1. Definitions
  2. Introducing Virtual and Immersive Environments
  3. Principles and Implications of 3D Vision
  4. Technologies for Virtual Environments
  5. Applications of Virtual and Immersive Environments

 

1. Definitions

avatar: a personalized graphical representation of a virtual reality user; a character or alter ego that represents the user

augmented reality (AR): a real-world environment enhanced by computer-generated information

Cave Automatic Virtual Environment (CAVE): a virtual reality environment in which images are projected onto three to six walls of a room-sized cube

cybersickness: the discomfort or nausea caused by using virtual reality technologies

depth cues: visual and/or other sensory information that supports perception of three-dimensional data

extended reality: eXtended Reality (XR) is an umbrella term covering the entire spectrum of realities, including virtual, augmented, and mixed reality; XR integrates cyber and physical environments using computers and wearables

first person view: a graphical perspective from the point of view of a user’s avatar/person

head-mounted display (HMD): a display device worn on the head or as part of a helmet that has a small visual display in front of one eye or both eyes

imagination: the combination of the user’s pre-existing knowledge and new information gained by the user through the various senses engaged in the VR

immersion: the sensation of being in an environment, feeling surrounded by it, and perceiving it as a whole

immersive virtual environments (IVEs): VEs supporting a near true-to-life level of immersion

information density: both the realism of a VE and the level of detail

intelligence: the ability of the displayed objects to refer to the contextual behavior of other objects in the VR, defining their ability to change, adapt, and react according to specific conditions (proximity of the VR user, external sensors, etc.)

interactivity: the ability of a computer (or virtual environment) to respond to a user’s input

mixed reality: a Mixed Reality (MR) environment presents real-world and virtual-world objects together within a single display

motion capture (MoCap): the process of recording the movement of people or objects

presence: the sense of “being somewhere” induced by immersion

pseudo 3D visualization (also referred to as monoscopic 3D visualization, or 2.5D): visualizations displayed perspective-monoscopically on flat media, such as computer screens

real 3D visualization: visualizations providing monocular and at least one binocular depth cue (also referred to as stereoscopic 3D visualization, or True 3D)

shutter glasses: a device for displaying stereoscopic 3D images; shutter glasses employ alternate frame sequencing, displaying an image to the left eye while blocking the right eye’s view, then displaying an image to the right eye while blocking the left, repeating rapidly enough that the alternating images fuse into a single perceived 3D image

virtual environment (VE): a 3D computer-based simulation of a real or imagined environment that users can navigate through and interact with

virtual reality (VR): a medium composed of interactive computer simulations that detect the participant’s position and movements and replace or augment feedback to one or more senses to induce the feeling of being mentally immersed or present in the simulation; VRs are a subset of possible VEs

visual cues: cues in an environment that support perception of depth and three dimensions

monocular cues: visual cues that are visible in 2D representation and observable with just one eye

binocular cues: visual cues that are processed by two eyes in three dimensions

 

2. Introducing Virtual and Immersive Environments

Virtual environments (VEs) allow users to step into and interact with worlds that are otherwise not accessible or may not even exist (Figure 1). A certain level of immersion is a prerequisite of any VE. Immersive virtual environments (IVEs) are an emerging subset of VEs characterized by a near true-to-life level of immersion now possible due to advances in displays and hardware controls (Section 4.1 and especially Figure 5). IVEs cover a large variety of computer-based simulations, from entirely virtual environments to real environments enhanced with digital information. This entry provides an overview of the history, characteristics, and design considerations for VEs, generally following the structure first presented in Çöltekin et al. (2020).


Figure 1. Participant engaged with an imaginary virtual environment. Source: authors.

 

In 1965, Ivan Sutherland presented the idea of developing an environment that “appears real, sounds real, feels real, and responds realistically to the viewer’s actions” (adapted from Brooks 1999, p. 17), marking the first milestone for virtual reality (VR). To realize this idea, Sutherland constructed the first head-mounted display (HMD), which supported stereoscopic viewing and updated images according to the user’s head orientation and position. Since then, extensive research has been conducted on VEs alongside the development of various supporting technologies (see Milgram and Kishino 1994; Mazuryk and Gervautz 1996; Çöltekin et al. 2020 for reviews). One of the most successful technologies historically was the Cave Automatic Virtual Environment (CAVE) virtual reality system (Cruz-Neira et al., 1992), which projected stereoscopic images onto three to six walls surrounding the user.

Another wave of interest in VEs arrived with the technological development of smartphones (see Mobile Maps & Responsive Design). Since 2014, this industry has accelerated the widespread use of relatively low-cost, high-resolution, portable VR devices in many fields, including science, art, gaming (entertainment), and social networking. Augmented reality (AR; for details, see Location-Based Services), a term first proposed in 1990, similarly uses mobile devices to enhance real environments with digital information (Lee, 2012).

VR technologies are continually evolving. VR should therefore be defined independently of specific technologies or devices, which may become obsolete in a very short time. Sherman and Craig (2002) described VR according to four key parameters: virtual world, immersion, sensory feedback (visual, haptic, movement), and interaction. Each of these parameters varies across specific VR setups and accordingly impacts the effectiveness of their use. Interactivity differs from sensory feedback, for example, in that it involves the user’s voluntary activity (see User Interface & User Experience Design). By contrast, sensory feedback is the automated response of the system to VR internal tracking devices (head and body position). Another defining factor for VR is the level of visual realism, which affects user immersion and performance (Lokka et al., 2018).

Some authors have investigated the concept of immersive virtual environments (IVEs), although for other authors, immersion is in itself a prerequisite of VR. The immersive quality of a VR system depends on its technical parameters, for example, stereoscopic (real 3D) visualization, refresh rate, level of realism (Figure 2), field of view (preferably first-person view), and tracking (Cummings and Bailenson, 2016).


Figure 2.  The relationship among Level of Detail (LoD), Level of Realism, and Information Density. Source: authors.

 

In cartography, MacEachren et al. (1999) introduced the term geospatial VE (GeoVE) and proposed a framework to describe virtual environments with four characteristics: immersion, interactivity, information density, and intelligence of the displayed objects. These four “I’s” partially overlap with Sherman and Craig’s (2002) definition (immersion and interactivity):

  • immersion: the sensation of being in an environment, feeling surrounded by it and perceiving it as a whole;
  • interactivity: the ability of a computer (or virtual environment) to respond to a user’s input;
  • information density: both the realism of a visualization and the level of detail; and
  • intelligence: the ability of the displayed objects to refer to the contextual behavior of other objects in the VR, defining their ability to change, adapt, and react according to specific conditions (proximity of the VR user, external sensors, etc.).
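
One way to make the four “I’s” concrete is to record them as a profile of a given GeoVE. The following TypeScript sketch is purely illustrative; the interface, field names, and scales are assumptions made for this entry, not part of MacEachren et al.’s framework:

```typescript
// An illustrative profile of a geospatial VE along the four "I's".
// The enum values and 0..1 scales are hypothetical conventions.
interface GeoVEProfile {
  immersion: "non-immersive" | "semi-immersive" | "immersive"; // e.g., desktop, CAVE, HMD
  interactivity: boolean;            // does the environment respond to user input?
  informationDensity: {
    levelOfDetail: number;           // 0 (coarse) .. 1 (fine), see Figure 2
    levelOfRealism: number;          // 0 (abstract) .. 1 (photorealistic)
  };
  intelligence: boolean;             // do objects react to context (user proximity, sensors)?
}

// Example: a desktop virtual globe scores low on immersion but high on interactivity.
const desktopGlobe: GeoVEProfile = {
  immersion: "non-immersive",
  interactivity: true,
  informationDensity: { levelOfDetail: 0.5, levelOfRealism: 0.8 },
  intelligence: false,
};
```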

Besides those mainly technology-driven “I’s”, imagination is discussed as another psychological phenomenon influencing VR applications. The imagination of the VR user describes the combination of the user’s pre-existing knowledge and new information gained by the user through the various senses engaged in the VR (Sheridan 2000, Cowan and Ketron 2019). By increasing the sensory input available in VR (i.e., vision, taste, touch, smell, and hearing), virtual experiences become more immersive and realistic, promoting imagination.

The realism (and applicability) of a VE is enhanced if a greater number of users can interact with the environment, resulting in what is known as a collaborative virtual environment (CVE). CVEs enable presence, interaction, and cooperation among several users represented by avatars in the same virtual space. The basic components of a VR system are shown in Figure 3.

 


Figure 3. General Components of a VR System. Source: authors.

 

3. Principles and Implications of 3D Vision

The ability to use VEs effectively is closely associated with the principles of 3D vision. The human ability to perceive the world in three dimensions is linked to depth perception supported by visual cues. Visual cues are classified into two primary groups: monocular and binocular. Monocular cues are visible in a 2D representation and observable with just one eye. Binocular cues are processed by two eyes in three dimensions. Monocular cues are grouped into static cues (e.g., lighting, size, occlusion/interposition, relative size of objects, linear perspective, texture gradients, aerial perspective) and dynamic cues (e.g., motion parallax, also referred to as kinetic depth). Binocular cues include binocular disparity and convergence. The visual cues present in a VE distinguish pseudo 3D (monoscopic) visualizations, which use only monocular cues, from real 3D (stereoscopic) visualizations, which exploit both binocular and monocular depth cues. Both types of visualization can be considered equivalent in terms of information, i.e., they can present exactly the same amount of spatial information. However, each involves different cognitive processing of the perceived visualization.
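
Binocular disparity, one of the binocular cues above, has a simple geometric interpretation under a pinhole stereo model: the depth of a point is inversely proportional to its horizontal offset between the left- and right-eye images. A minimal TypeScript sketch (function and parameter names are illustrative):

```typescript
// Depth from binocular disparity: depth = (focalLength * baseline) / disparity.
// Units must be consistent: focal length and disparity in pixels, baseline in metres.
function depthFromDisparity(
  focalLengthPx: number, // focal length of the eye/camera expressed in pixels
  baselineM: number,     // separation of the two eyes/cameras in metres
  disparityPx: number    // horizontal offset of the point between the two images
): number {
  if (disparityPx <= 0) throw new RangeError("disparity must be positive");
  return (focalLengthPx * baselineM) / disparityPx;
}

// Example: ~65 mm interocular distance, 1000 px focal length, 20 px disparity
console.log(depthFromDisparity(1000, 0.065, 20).toFixed(2)); // "3.25" metres
```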

Vision-based eye movements also can be captured in three dimensions as an interactive input for information retrieval about objects in the VE (see UI/UX Design). Combining navigational interactivity (i.e., panning, zooming, rotating the view) with updated real 3D (stereoscopic) or pseudo 3D (monoscopic) cues improves interaction affordances and feedback while traversing the VE, improving task performance (Figure 4).
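
As a sketch of how such navigational interactivity often is parameterized, an orbit-style camera maps pan, zoom, and rotate onto a target point, a radius, and two angles. The names and the y-up convention below are assumptions for illustration:

```typescript
type Vec3 = { x: number; y: number; z: number };

// Orbit camera: the viewpoint circles a target at a given radius.
// Rotating changes azimuth/elevation, zooming changes radius, panning moves the target.
function orbitCameraPosition(
  target: Vec3,
  azimuthRad: number,
  elevationRad: number,
  radius: number
): Vec3 {
  return {
    x: target.x + radius * Math.cos(elevationRad) * Math.sin(azimuthRad),
    y: target.y + radius * Math.sin(elevationRad),
    z: target.z + radius * Math.cos(elevationRad) * Math.cos(azimuthRad),
  };
}

// Each interaction updates one parameter and re-renders the scene, so dynamic
// depth cues such as motion parallax stay consistent with the user's input.
```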

 


Figure 4. The possible depth cues in different types of 3D environments. Source: authors.

 

4. Technologies for Virtual Environments

VEs are complex systems involving a variety of hardware and software solutions (see below).

4.1 Hardware

Three key hardware components are needed to create a VE (LaValle, 2019):

  • Input devices for obtaining information from the real world. These devices are handled by the user.
  • Computing unit for processing input data and creating outputs.
  • Output devices for stimulating user senses (user perception).

 4.1.1 Input devices

Keeping track of user motion is a crucial part of any VE system. Tracking devices monitor and capture the position and orientation of the user’s point of view or the movement and orientation of their entire body. This attribute of a VE system is referred to as Motion Capture (MoCap).

Motion capture is very important when a user wears an HMD. Tracking in a VE system can function on different detection principles, for example, acoustic tracking, electromagnetic tracking, and mechanical tracking. The most frequently used method is optical tracking, which works on the principle of tracking reflective points in a device’s visual field. These points are placed on the user’s body or a device (e.g., HMD, control devices). Examples of devices using optical tracking include MS Kinect and Leap Motion. Another important part of the tracking system is the Inertial Measurement Unit (IMU), which detects the current rate of acceleration and changes in rotation. Although used in advanced VE systems, IMUs are not a component of low-cost solutions such as Google Cardboard, which is a major disadvantage of these devices.
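
As an illustration of what the IMU contributes, the sketch below uses a complementary filter, a common textbook technique (not taken from this entry’s sources) for fusing a gyroscope’s fast but drifting rate signal with an accelerometer’s noisy but drift-free, gravity-based orientation estimate:

```typescript
// Complementary filter for one rotation axis (e.g., pitch).
// alpha close to 1 trusts the integrated gyroscope; the remaining weight
// continuously corrects the gyroscope's drift toward the accelerometer estimate.
function fusePitch(
  prevPitchRad: number,
  gyroRateRadPerS: number, // angular rate around the pitch axis from the gyroscope
  accelPitchRad: number,   // pitch derived from the accelerometer's gravity vector
  dtS: number,             // time since the last sample, in seconds
  alpha = 0.98
): number {
  return alpha * (prevPitchRad + gyroRateRadPerS * dtS) + (1 - alpha) * accelPitchRad;
}
```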

As mentioned above, motion tracking in a VE system often is combined with specific interactive input devices. VE input devices with more than two degrees of freedom (DoF; i.e., beyond up-down and left-right) improve navigation and interaction in a VE. Ideally, interaction devices allow users all six DoF, as does a 3D mouse. 3D mouse controllers have a pressure-sensitive handle that permits movement through a virtual environment or interaction with a 3D model. HTC VIVE, Wii Remote Controllers, and Oculus Touch are examples of devices that fall between two and six DoF, providing three DoF (movement in three dimensions, without rotation). Other game controllers, such as gamepads and joysticks, also can support more than two DoF; the exact number of supported DoF depends on the particular device.
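
A 6-DoF pose, the information a full tracking system delivers per update, combines three translational and three rotational degrees of freedom. A minimal TypeScript sketch (field names are illustrative):

```typescript
// A 6-DoF pose: position gives three translational DoF, orientation three rotational.
// A unit quaternion is the usual rotation encoding because it composes cleanly
// and avoids gimbal lock.
interface Pose6DoF {
  position: { x: number; y: number; z: number };
  orientation: { x: number; y: number; z: number; w: number }; // unit quaternion
}

// Devices differ in how many of these six values they can drive at once.
```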

Traditional desktop control devices that provide only two DoF, such as computer mice, keyboards, touchpads, and touchscreens, frequently are used to control VR systems. Here, 3D movements are mapped to special buttons or shortcut keys.

4.1.2 Computing unit

The computing unit is the hardware used to process data input and subsequently create visual output. These tasks place great demands on the central processing unit (CPU), memory (RAM), and especially the graphics processing unit (GPU). The GPU (see Graphics Processing Units) is an important hardware component in creating any VE. Current computer builds use tens of GB of memory, multi-core Intel and AMD processors, and high-end graphics cards such as the NVIDIA GeForce RTX 2080 or AMD Radeon VII. The rendering rate of a VE has a significant impact on the level of presence conveyed by VE applications, as depicted images must be displayed at a high frame rate (with low latency) to preserve the continuous illusion of reality and prevent cybersickness. Screen resolutions also may need adjustment, as some technologies require greater pixel counts (e.g., CAVE). Computing units are connected to input and output devices via high-speed cables (HDMI, USB 3.0).
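
The frame-rate requirement translates directly into a per-frame time budget: all CPU and GPU work for a frame must fit within the refresh interval. A small illustrative calculation in TypeScript (the 90 Hz figure is a common HMD target, not a universal rule):

```typescript
// Per-frame budget in milliseconds at a given refresh rate.
function frameBudgetMs(refreshRateHz: number): number {
  return 1000 / refreshRateHz;
}

console.log(frameBudgetMs(60).toFixed(1)); // "16.7" ms, typical desktop monitor
console.log(frameBudgetMs(90).toFixed(1)); // "11.1" ms, common HMD target against cybersickness
```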

4.1.3 Output devices

Display devices are the most important output devices for virtual environments and Cartography and Visualization broadly. Other output devices, such as audio systems or haptic output interfaces, generally only are used to support visual feedback (see Symbolization & the Visual Variables). Display devices can be categorized into traditional displays, various semi-immersive systems, and HMDs (Figure 5).

 


Figure 5. Classification of output devices used to provide virtual reality. The display devices are most common within geographic information science. Source: authors.

 

Traditional (2D) display devices include computer monitors and projectors. These devices offer only a limited display area and use monocular depth cues. The VEs on these displays do not look genuinely 3D and provide the lowest immersion; they therefore often are described as non-immersive devices when used for VR.

Systems that include shutter glasses and 3D monitors or CAVEs are based on stereoscopy and binocular depth cues. The displayed space may consist of a single display surface or multiple surfaces, although the user is not completely surrounded. These systems provide a greater level of immersion than typical display devices and are described as semi-immersive devices. This category includes various types of systems such as wall-sized displays, CAVE systems, cabin VR, and other simulators. All of these systems require the user to wear 3D glasses, which may be either active or passive devices. A 3D monitor combined with NVIDIA 3D Vision Wireless Glasses is an example of an active system, which presents a unique image to each eye by rapidly alternating frames. Passive systems employ polarized glasses paired with a display that polarizes alternating lines of pixels. This filtering makes the odd lines of pixels visible only to the left eye and the even lines of pixels visible only to the right eye. Dolby 3D is an example of a passive system.
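
The line-by-line filtering of a passive system can be pictured as row interleaving: the displayed frame takes odd pixel rows from the left-eye image and even rows from the right-eye image. A minimal sketch, representing images simply as arrays of pixel rows:

```typescript
// Build the displayed frame for a line-polarized passive stereo display.
// Rows are counted 1-based so "odd" and "even" match the description above:
// odd rows come from the left-eye image, even rows from the right-eye image.
function interleaveStereoRows<T>(left: T[][], right: T[][]): T[][] {
  if (left.length !== right.length) {
    throw new Error("left and right images must have the same height");
  }
  return left.map((leftRow, i) => ((i + 1) % 2 === 1 ? leftRow : right[i]));
}
```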

As described above, HMDs are the devices that currently provide the highest level of immersion. They allow users to be cognitively separated from the real world and completely engulfed by the virtual environment. Virtual reality HMDs are a type of helmet with small display optics in front of each eye. Different types of HMDs can be distinguished according to computing power and device mobility. Examples of mobile devices are Google Cardboard, Google Daydream, Lenovo Mirage Solo, Oculus Go, and Samsung Gear VR. Examples of stationary devices are FOVE 0, HTC Vive, PlayStation VR, and Oculus Rift. Mobile HMD devices usually are cheaper and more easily available, though they do not provide the most immersive or comfortable experience.

The most suitable output device is selected chiefly according to the purpose of the displayed VE. When a VE is used in mobile applications, mobile devices (smartphone, tablet, mobile HMD) are preferred. If the aim is to create a VE with a high degree of immersion, an HMD is more appropriate. Display devices also are linked to suitable control devices, some controls being designed specifically for a particular display (Oculus Touch and Oculus Rift) or for handheld use (i.e., handheld input devices can be combined successfully with a CAVE, while devices placed on a table cannot).

4.2 Software

Numerous software tools and technologies are available for creating VEs and can be grouped into several categories: CAD programs, GIS programs, 3D computer graphics tools, and photogrammetry tools, among others. Game engines and web technologies also frequently are used to create virtual environments.

4.2.1 Desktop software

The desktop software examples listed in Table 1 can be applied to partial tasks during the process of creating a VE. Specific tools and programs often are combined. If the created VE requires detail, photogrammetric software or CAD programs generally are used to process primary data. If the created environment is less detailed, it can be produced using 3D modeling from attribute data in GIS software. Exported visuals in a VE can be modified further using 3D graphics programs. Game engines, which also contain functions for navigation and user interaction in VE, also can be used to enhance 3D models.

 

Table 1. Desktop software suitable for creating virtual environments. Programs that are free and open source are marked with an asterisk (*). The others are proprietary, many of which are available for trial with demo versions or student licenses.

| Category | Main Purpose(s) | Examples |
|---|---|---|
| CAD | Editing 3D geometry | Autodesk LandXplorer, AutoCAD, Bentley Microstation, Trimble (formerly Google) SketchUp |
| GIS | Terrain processing, 3D modeling from vector data based on attributes, 3D spatial analysis (volume calculations, 3D overlay algebra, visibility analysis) | ArcGIS Pro, ArcGIS with 3D Analyst extension (ArcScene, ArcGlobe), Intergraph GeoMedia Terrain, MapInfo with Vertical Mapper, Safe Software's FME, Atlas Digital Terrain Model, GRASS*, QGIS* (since version 3.0), gvSIG* |
| Photogrammetry: stereo-photogrammetry | Creating 2.5D terrain models | Leica Geosystems ERDAS Imagine, PCI Geomatica (OrthoEngine), Exelis (formerly ITT Defense & Information Solutions) ENVI |
| Photogrammetry: Structure from Motion (SfM) | Creating 3D models | Agisoft Metashape (formerly Agisoft PhotoScan), VisualSfM, MicMac*, COLMAP*, OpenMVG* |
| 3D computer graphics software | Setting textures, materials, depth maps, lighting, shading, and rendering in general | Autodesk 3D Studio MAX, Cinema 4D, Rhinoceros 3D, Blender*, MeshLab* |
| Specialized software | Procedural modeling, combining GIS and 3D computer graphics | Esri CityEngine |
| Simulation software | Simulation and advanced analytical modeling | FireFLY |
| Game engines | Modifying visualizations (similarly to 3D computer graphics software), defining interaction and navigation | Unity 3D, Unreal Engine 4, CryEngine, Second Life, Microsoft Flight Simulator, id Tech (Quake), OpenSceneGraph* |

 

4.2.2 Web technologies

Various web technologies (see Web Mapping) are used to visualize smaller or simpler 3D visualizations that cannot be considered directly immersive environments. VRML (Virtual Reality Modeling Language), which previously was applied to create 3D visualizations for the web, currently is being replaced by standards such as 3D Tiles and glTF. 3D visualizations formerly were rendered in web browsers mostly through plug-ins (e.g., Cortona 3D Viewer); a plug-in was also used to implement the NASA World Wind virtual globe. Today, preference is given to technologies built on HTML5 and the JavaScript WebGL API. These include three.js, X3DOM, XML3D, and SpiderGL. A number of 3D virtual globes are available in addition to general 3D libraries; the best known is Cesium. Herman & Řezník (2015) provide more detail about commonly used web-based technologies for 3D geovisualization. A gradually developing trend is the integration of web technologies and immersive virtual reality. The open specification WebVR was developed for this purpose and requires a compatible web browser and an HMD.
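
For a sense of scale, a minimal three.js scene takes only a few lines. The TypeScript sketch below assumes the `three` npm package and uses a plain box as a stand-in for real 3D geodata:

```typescript
import * as THREE from "three";

// Scene, camera, and WebGL renderer: the core objects of any three.js application.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.set(0, 1, 5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A box standing in for an extruded building footprint or other 3D geodata.
const building = new THREE.Mesh(
  new THREE.BoxGeometry(1, 2, 1),
  new THREE.MeshStandardMaterial({ color: 0x8899aa })
);
scene.add(building);

const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(3, 5, 4);
scene.add(light);

// Render loop: re-drawing every frame is what makes interaction possible.
renderer.setAnimationLoop(() => {
  building.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```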

 

5. Applications of Virtual and Immersive Environments

Virtual and immersive environments have been applied in many fields, including medicine (Ruthenbeck and Reynolds, 2014), manufacturing (Lawson et al., 2015), education (Šašinka et al., 2019; Figure 6), culture (Debailleux et al., 2018), and sports. The potential for the geosciences is vast (Biljecki et al., 2015). The digital world map provided by Google Earth is a well-known application of VR enhanced with spatial information. The application offers users seamless satellite imagery of the Earth and other planets.

Education in geography can serve as an example. Immersive VEs permit new methods of presenting and explaining different topics and concepts, both situated in the landscape and, when visits are not possible, removed from it (see below).

Contour lines and contour intervals are a discrete representation of continuous phenomena (see Terrain Representation), a concept that can be difficult for students to understand when presented in 2D media. VR or AR systems allow students to observe the transformation of 3D terrain into a 2D map, interactively change contour line intervals, and manipulate the terrain itself while observing the change in contours (Figures 6 and 7; see also the video demonstration of collaborative education using VR and AR).
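
The underlying operation is simple to express: classifying each cell of a height grid into a contour band for a chosen interval. The TypeScript sketch below is an illustration of the concept, not the software behind Figures 6 and 7:

```typescript
// Assign every cell of a height grid (metres) to a contour band.
// Changing intervalM and re-running is the batch equivalent of the interactive
// contour-interval manipulation students perform in the VR sandbox.
function contourBands(heightsM: number[][], intervalM: number): number[][] {
  return heightsM.map((row) => row.map((h) => Math.floor(h / intervalM)));
}

// Example: a tiny 2×3 terrain patch with a 10 m contour interval.
console.log(contourBands([[3, 12, 27], [8, 19, 31]], 10));
// → [[0, 1, 2], [0, 1, 3]]; contour lines run along the band boundaries.
```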

 


Figure 6. A user's avatar and contour line task in a virtual classroom. Source: authors.

 


Figure 7. User performing a contour line task using a VR sandbox. Source: authors.

 

Another application of VR in the geosciences is emergency management. VR can be a tool for simulating the effects of small-scale phenomena, such as floods and landslides, or large-scale phenomena, such as building evacuations (even in structures not yet built). Simulations in a real environment are usually expensive or not possible at all. The example in Figure 8 shows a hypothetical building evacuation. The results from these types of simulations can assist researchers (or other professionals) in designing the size and length of corridors, the capacity of elevators, and other parameters of planned buildings (e.g., football stadiums, metro/train stations, and other structures).
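
At its core, such an evacuation simulation advances agents toward exits step by step. The TypeScript sketch below is a deliberately naive illustration (no collision avoidance, corridor capacities, or route choice), not the model behind Figure 8:

```typescript
type Vec2 = { x: number; y: number };

// One simulation step: each agent walks straight toward its nearest exit.
// Assumes at least one exit; dtS is the time step in seconds.
function stepAgents(agents: Vec2[], exits: Vec2[], walkSpeedMS: number, dtS: number): Vec2[] {
  return agents.map((a) => {
    const exit = exits.reduce((best, e) =>
      Math.hypot(e.x - a.x, e.y - a.y) < Math.hypot(best.x - a.x, best.y - a.y) ? e : best
    );
    const dist = Math.hypot(exit.x - a.x, exit.y - a.y);
    if (dist < 1e-6) return a; // already at an exit
    const step = Math.min(walkSpeedMS * dtS, dist);
    return { x: a.x + ((exit.x - a.x) / dist) * step, y: a.y + ((exit.y - a.y) / dist) * step };
  });
}

// Repeated calls to stepAgents, with counters at doors and corridors, yield the
// throughput figures used to size corridors and elevators in a planned building.
```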


Figure 8. An example of a simulated environment for an evacuation task (left: first person view; right: perspective view of the experimental environment). Source: authors.

References: 

Biljecki, F., Stoter, J., Ledoux, H., Zlatanova, S., Çöltekin, A. (2015). Applications of 3D City Models: State of the Art Review. ISPRS International Journal of Geo-Information 4(4), 2842-2889. DOI: 10.3390/ijgi4042842.

Brooks, F. P. (1999). What's Real About Virtual Reality? IEEE Computer Graphics and Applications 19(6), 16-27.

Buchroithner, M. F., Knust, C. (2013). True-3D in Cartography – Current Hard and Softcopy Developments. In Moore, A., Drecki, I. Geospatial Visualisation, 41-65. Heidelberg: Springer. DOI: 10.1007/978-3-642-12289-7_3.

Çöltekin, A., Lochhead, I., Madden, M., Christophe, S., Devaux, A., Pettit, C., Lock, O., Shukla, S., Herman, L., Stachoň, Z., Kubíček, P., Snopková, D., Bernardes, S., Hedley, N. (2020). Extended Reality in Spatial Sciences: A Review of Research Challenges and Future Directions. ISPRS International Journal of Geo-Information 9(7), 439. DOI: 10.3390/ijgi9070439.

Cruz-Neira, C., Sandin, D. J., DeFanti, T. A., Kenyon, R. V., Hart, J. C. (1992). The CAVE: Audio Visual Experience Automatic Virtual Environment. Communications of the ACM 35(6), 64-72. DOI: 10.1145/129888.129892.

Cummings, J. J., Bailenson, J. N. (2016). How Immersive is Enough? A Meta-analysis of the Effect of Immersive Technology on User Presence. Media Psychology. 19(2), 272-309, DOI:10.1080/15213269.2015.1015740

Debailleux, L., Hismans, G., Duroisin, N. (2018). Exploring Cultural Heritage Using Virtual Reality. In Ioannides, M. Digital Cultural Heritage, 289-303. Cham: Springer. DOI: 10.1007/978-3-319-75826-8.

Herman, L., Řezník, T. (2015). 3D Web Visualization of Environmental Information – Integration of Heterogeneous Data Sources when Providing Navigation and Interaction. In Mallet, C., et al. ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XL-3/W3, 479-485. Gottingen: Copernicus GmbH. DOI: 10.5194/isprsarchives-XL-3-W3-479-2015.

LaValle, S. M. (2019). Virtual Reality. Cambridge University Press. Retrieved from: http://vr.cs.uiuc.edu

Lawson, G., Salanitri, D., Waterfield, B. (2015). VR Processes in the Automotive Industry. 17th International Conference, HCI International. DOI: 10.1007/978-3-319-21006-3_21.

Lee, K. (2012). Augmented Reality in Education and Training. TechTrends 56(2), 13-21. DOI: 10.1007/s11528-012-0559-3.

Lokka, I. E., Çöltekin, A., Wiener, J., Röcke, C. (2018). Virtual Environments as Memory Training Devices in Navigational Tasks for Older Adults. Scientific Reports 8, 10809. DOI: 10.1038/s41598-018-29029-x.

Mazuryk, T., Gervautz, M. (1996). Virtual Reality - History, Applications, Technology and Future. Retrieved from: https://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH/TR-186-2-96-06Paper.pdf

MacEachren, A. M. (1995). How Maps Work. New York: Guildford Press.

MacEachren, A. M., Edsall, R. M., Haug, D., Baxter, R., Otto, G., Masters, R., Fuhrmann, S., Qian, L. (1999). Virtual Environments for Geographic Visualization: Potential and Challenges. In Proceedings of the 1999 Workshop on New Paradigms in Information Visualization and Manipulation in Conjunction with the Eighth ACM International Conference on Information and Knowledge Management, 35-40. DOI: 10.1145/331770.331781.

Milgram, P., Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems 77(12), 1321-1329.

Ruthenbeck, G., Reynolds, K. (2014). Virtual Reality for Medical Training: The State of the Art. Journal of Simulation. 9(1), 16-26. DOI: 10.1057/jos.2014.14

Šašinka, Č., Stachoň, Z., Sedlák, M., Chmelík, J., Herman, L., Kubíček, P., Strnadová, A., Doležal, M., Tejkl, H., Urbánek, T., Svatoňová, H., Ugwitz, P., Juřík V. (2019). Collaborative Immersive Virtual Environments for Education in Geography. ISPRS International Journal of Geo-Information 8(1), 1-25. DOI: 10.3390/ijgi8010003.

Sherman, W. R., Craig, A. B. (2002). Understanding Virtual Reality: Interface, Application, and Design. San Francisco: Morgan Kaufmann.

Slocum, T. A., McMaster, R. B., Kessler, F. C., Howard, H. H. (2005). Thematic Cartography and Geographic Visualization. 2nd ed. Upper Saddle River: Pearson Prentice Hall.

Learning Objectives: 
  • Discuss the meanings and relationships of “virtual” and “augmented” environments as they relate to virtual reality.
  • Compare and contrast the relative advantages of different immersive display, processing, and output systems used for cartographic visualization (e.g., CAVEs, HMDs, etc.).
  • Explain how virtual and immersive environments become increasingly more complex as we progress from non-immersive pseudo 3D environments to stereoscopic, real 3D, fully immersive environments.
  • Explain the principles of virtual environments according to MacEachren’s four “I”s: immersion, interactivity, information density, and intelligence of the displayed objects.
  • Create a hypothetical virtual environment by identifying a logical stack of hardware and software.
  • Describe a hypothetical use case application of an IVE for a given domain (e.g., medicine, manufacturing, education, culture, or sports).
Instructional Assessment Questions: 
  1. What are the differences between virtual and augmented realities?
  2. What are the main components of VE systems?
  3. What are the limitations of virtual environments?
  4. Which devices can be used to display VEs? What affects their deployment?
  5. What are the roles of GIS, photogrammetric software, CAD programs, 3D computer graphics software, and game engines in creating VEs?
  6. What fields can benefit from the usage of VEs?
Additional Resources: 
  1. ISPRS WG IV/9: Geovisualization, Augmented and Virtual Reality
  2. ICA Commission on Location Based Services
  3. ICA Commission on UX: Designing the User Experience
  4. Çöltekin, A., Griffin, A. L., Slingsby, A., Robinson, A. C., Christophe, S., Rautenbach, V., Chen, M., Pettit, C., Klippel, A. (2020). Geospatial Information Visualization and Extended Reality Displays. In Guo, H., Goodchild, M., Annoni, A. Manual of Digital Earth. Singapore: Springer. DOI: 10.1007/978-981-32-9915-3_7
  5. Halik, L. (2018). Challenges in Converting the Polish Topographic Database of Built-Up Areas into 3D Virtual Reality Geovisualization. The Cartographic Journal 55(4), 391-399, DOI: 10.1080/00087041.2018.1541204
  6. Hruby, F., Ressl, R., de la Borbolla del Valle, G. (2019). Geovisualization with Immersive Virtual Environments in Theory and Practice. International Journal of Digital Earth 12(2), 123-136. DOI:10.1080/17538947.2018.1501106
  7. Kubíček, P., Šašinka, Č., Stachoň, Z., Herman, L., Juřík, V., Urbánek, T., Chmelík, J. (2019). Identification of Altitude Profiles in 3D Geovisualizations: The Role of Interaction and Spatial Abilities. International Journal of Digital Earth 12(2), 156-172. DOI: 10.1080/17538947.2017.1382581.
  8. Laksono, D., Aditya, T. (2019). Utilizing A Game Engine for Interactive 3D Topographic Data Visualization. ISPRS International Journal of Geo-Information 8(8), 1-18. DOI: 10.3390/ijgi8080361
  9. Zhao, J., Wallgrün, J. O., LaFemina, P. C., Normandeau, J., Klippel, A. (2019). Harnessing the Power of Immersive Virtual Reality - Visualization and Analysis of 3D Earth Science Data Sets. Geo-spatial Information Science, 22(4), 237-250. DOI: 10.1080/10095020.2019.1621544