Search Page

CV-18 - Representing Uncertainty

Using geospatial data involves numerous uncertainties stemming from sources such as inaccurate or erroneous measurements, the inherent ambiguity of the described phenomena, or the subjectivity of human interpretation. If the uncertain nature of the data is not represented, ill-informed interpretations and decisions can result. Accordingly, there has been significant research activity devoted to describing and visualizing uncertainty in data rather than ignoring it. Multiple typologies have been proposed to identify and quantify relevant types of uncertainty, and a multitude of techniques for visualizing uncertainty have been developed. However, the use of such techniques in practice is still rare because standardized methods and guidelines are few and largely untested. This contribution provides an introduction to the conceptualization and representation of uncertainty in geospatial data, focusing on strategies for selecting suitable representation and visualization techniques.

CV-16 - Virtual and Immersive Environments

A virtual environment (VE) is a 3D computer-based simulation of a real or imagined environment in which users can navigate and interact with virtual objects. VEs have found popular use in communicating geographic information for a variety of domain applications. This entry begins with a brief history of virtual and immersive environments and an introduction to a common framework used to describe characteristics of VEs. Four design considerations for VEs are then reviewed: cognitive, methodological, social, and technological. The cognitive dimension involves generating a strong sense of presence for users in a VE, enabling users to perceive and study represented data in both virtual and real environments. The methodological dimension covers methods for collecting, processing, and visualizing data for VEs. The technological dimension surveys different VE hardware devices (input, computing, and output devices) and software tools (desktop and web technologies). Finally, the social dimension captures existing use cases for VEs in geo-related fields, such as geography education, spatial decision support, and crisis management.

AM-40 - Areal Interpolation

Areal interpolation is the process of transforming spatial data from source zones with known values or attributes to target zones with unknown attributes, generating estimates of source-zone attributes over target-zone areas. It aligns areal spatial data attributes over a single spatial framework (the target zones) to overcome differences in areal reporting units, whether caused by historical boundary changes of reporting areas, by integrating data from domains with different reporting conventions, or by situations in which spatially detailed information is not available. Fundamentally, it requires assumptions about how the target-zone attribute relates to the source zones. Areal interpolation approaches can be grouped into two broad categories: methods that link target and source zones by their spatial properties (area-to-point, pycnophylactic, and areal weighted interpolation) and methods that use ancillary or auxiliary information to control, inform, guide, and constrain the interpolation process (dasymetric, statistical, street-weighted, and point-based interpolation). Additionally, there are new opportunities to inform areal interpolation with novel data sources arising from the many new forms of spatial data supported by ubiquitous web- and GPS-enabled technologies, including social media, POI check-ins, spatial data portals (e.g., for crime, house sales, or microblogging sites), and collaborative mapping activities (e.g., OpenStreetMap).
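The simplest of the spatial-property methods, areal weighted interpolation, assumes each attribute is uniformly distributed within its source zone and apportions it to target zones in proportion to overlap area. A minimal sketch in plain Python (geometries reduced to precomputed intersection areas; all names are illustrative, not from any particular GIS library):

```python
def areal_weighted(source_values, source_areas, intersection_areas):
    """Estimate target-zone totals from source-zone totals.

    source_values[i]         -- attribute total of source zone i (e.g. population)
    source_areas[i]          -- area of source zone i
    intersection_areas[t][i] -- overlap area between target zone t and source zone i
    Assumes the attribute is uniformly distributed within each source zone.
    """
    return [
        sum(v * (a_int / a_src)
            for v, a_src, a_int in zip(source_values, source_areas, overlaps))
        for overlaps in intersection_areas
    ]

# Two source zones, each split evenly between two target zones:
print(areal_weighted([100, 200], [10.0, 10.0],
                     [[5.0, 5.0], [5.0, 5.0]]))  # [150.0, 150.0]
```

In practice the intersection areas would come from a polygon-overlay operation; the pycnophylactic and dasymetric methods mentioned above replace the uniform-distribution assumption with smoother or ancillary-data-driven ones.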

CV-36 - Geovisual Analytics

Geovisual analytics refers to the science of analytical reasoning with spatial information, as facilitated by interactive visual interfaces. It is distinguished by its focus on novel approaches to analysis rather than novel approaches to visualization or computational methods alone. As a result, geovisual analytics is usually grounded in real-world problem-solving contexts. Research in geovisual analytics may focus on the development of new computational approaches to identify or predict patterns, new visual interfaces to geographic data, or new insights into the cognitive and perceptual processes that users apply to solve complex analytical problems. Systems for geovisual analytics typically feature a high degree of user-driven interactivity and multiple visual representation types for spatial data. Geovisual analytics tools have been developed for a variety of problem scenarios, such as crisis management and disease epidemiology. Looking ahead, the emergence of new spatial data sources and display formats is expected to spur an expanding set of research and application needs for the foreseeable future.

CV-04 - Scale and Generalization

Scale and generalization are two fundamental, related concepts in geospatial data. Scale has multiple meanings depending on context, both within geographic information science and in other disciplines. Typically it refers to the relative proportion between objects in the real world and their representations. Generalization is the act of modifying detail, usually reducing it, in geospatial data. It is often driven by a need to represent data at a coarsened resolution, typically as a consequence of reducing representation scale. Multiple computational and graphical modification processes can be used to achieve generalization, each introducing increased abstraction to the data, its symbolization, or both.
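Line simplification is one common generalization operation: the Douglas-Peucker algorithm, for example, discards vertices that deviate from the chord between endpoints by less than a tolerance. A minimal sketch (illustrative only, not tied to any particular GIS package):

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the infinite line through a, b.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    # Keep the endpoints; recurse on the farthest intermediate vertex only
    # if it departs from the endpoint chord by more than `tolerance`.
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i_max + 1], tolerance)
    right = douglas_peucker(points[i_max:], tolerance)
    return left[:-1] + right  # drop duplicated split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))  # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Raising the tolerance coarsens the line further, mirroring how generalization increases abstraction as representation scale decreases.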

AM-09 - Classification and Clustering

Classification and clustering are often confused with each other or used interchangeably. The two are distinguished by whether the number and type of classes are known beforehand (classification) or learned from the data (clustering). The overarching goal of both is to place observations into groups that share similar characteristics while maximizing the separation of groups that are dissimilar to each other. Clusters are found in environmental and social applications, and classification is a common way of organizing information. Both are used in many areas of GIS, including spatial cluster detection, remote sensing classification, cartography, and spatial analysis. Cartographic classification methods present a simplified way to examine some classification and clustering methods, and these are explored in more depth with example applications.
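The contrast can be sketched in a few lines: an equal-interval cartographic classifier fixes the number of classes and their breakpoints in advance, while a simple one-dimensional k-means routine learns group centers from the data. Both functions below are illustrative toys, not production implementations:

```python
def equal_interval(values, k):
    # Classification: class count and breakpoints are fixed before
    # the data are examined.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [min(int((v - lo) / width), k - 1) for v in values]

def kmeans_1d(values, k, iters=50):
    # Clustering: group structure is learned from the data itself
    # (Lloyd's algorithm in one dimension; k >= 2 assumed).
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:  # leave an empty cluster's center unchanged
                centers[j] = sum(members) / len(members)
    return labels

values = [1, 2, 3, 10, 11, 12, 20, 21, 22]
print(equal_interval(values, 3))  # [0, 0, 0, 1, 1, 1, 2, 2, 2]
print(kmeans_1d(values, 3))       # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

On this well-separated toy dataset the two agree; on skewed data the fixed equal-interval breaks and the learned k-means groups typically diverge, which is exactly the distinction the entry draws.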

AM-21 - The Evolution of Geospatial Reasoning, Analytics, and Modeling

The field of geospatial analytics and modeling has a long history coinciding with the physical and cultural evolution of humans. This history is analyzed relative to the four scientific paradigms: (1) empirical analysis through description, (2) theoretical explorations using models and generalizations, (3) simulation of complex phenomena, and (4) data exploration. Correlations between developments in general science and those in the geospatial sciences are explored. Identified trends point to areas ripe for growth and improvement in the fourth and current paradigm, spawned by the big data explosion, such as exposing the ‘black box’ of GeoAI training and generating big geospatial training datasets. Future research should focus on integrating both theory- and data-driven knowledge discovery.

AM-106 - Error-based Uncertainty

The largest contributing factor to spatial data uncertainty is error. Error is defined as the departure of a measure from its true value. Uncertainty results from (1) a lack of knowledge of the extent and expression of errors and (2) their propagation through analyses. Understanding error and its sources is key to addressing error-based uncertainty in geospatial practice. This entry presents a sample of issues related to error and error-based uncertainty in spatial data: (1) types of error in spatial data, (2) the special case of scale and its relationship to error, and (3) approaches to quantifying error in spatial data.
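Two of the most common quantitative treatments of error can be sketched briefly: root-mean-square error (RMSE) summarizes how far measurements depart from reference values, and the standard deviations of independent errors combine in quadrature when the measured quantities are added. A minimal illustration (all values invented):

```python
import math

def rmse(measured, reference):
    # Root-mean-square error: a standard single-number summary of
    # departures of measurements from their reference ("true") values.
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference))
                     / len(measured))

def propagated_sd(*sds):
    # Standard deviations of independent errors combine in quadrature
    # when the measured quantities are added or subtracted.
    return math.sqrt(sum(s ** 2 for s in sds))

print(rmse([10.2, 9.8, 10.1, 9.9], [10, 10, 10, 10]))  # ≈ 0.158
print(propagated_sd(3.0, 4.0))                         # 5.0
```

The second function is the simplest case of error propagation through an analysis; nonlinear operations require first-order (Taylor) approximations or Monte Carlo simulation instead.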

CV-19 - Big Data Visualization

As new information and communication technologies have altered so many aspects of our daily lives over the past decades, they have simultaneously stimulated a shift in the types of data that we collect, produce, and analyze. Collectively, this changing data landscape is often referred to as "big data." Big data is distinguished from "small data" not only by its high volume but also by the velocity, variety, exhaustivity, resolution, relationality, and flexibility of the datasets. This entry discusses the visualization of big spatial datasets. As many such datasets contain geographic attributes or are situated and produced within geographic space, cartography takes on a pivotal role in big data visualization. Visualization of big data is frequently and effectively used to communicate and present information, but it is in making sense of big data, generating new insights and knowledge, that visualization is becoming an indispensable tool, making cartography vital to understanding geographic big data. Although big data visualization presents several challenges, human experts, aided by interfaces and software designed for this purpose, can use visualization in general, and cartography in particular, to explore and analyze big data effectively.

AM-84 - Simulation Modeling

Advances in computational capacity have enabled dynamic simulation modeling to become increasingly widespread in scientific research. As opposed to conceptual or physical models, simulation models enable numerical experimentation with alternative parametric assumptions for a given model design. Numerous design choices are made in model development that involve continuous or discrete representations of time and space. Simulation modeling approaches include system dynamics, discrete event simulation, agent-based modeling, and multi-method modeling. The model development process involves a shift from qualitative design to quantitative analysis upon implementation of a model in a computer program or software platform. Upon implementation, model analysis is performed through rigorous experimentation to test how model structure produces simulated patterns of behavior over time and space. Validation of a model through correspondence of simulated results with observed behavior facilitates its use as an analytical tool for evaluating strategies and policies that would alter system behavior.
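As a toy illustration of the workflow described above, a one-stock system-dynamics model can be stepped through discrete time: a population stock updated by birth and death flows. All parameter values are invented for the example:

```python
def simulate_population(p0, birth_rate, death_rate, steps, dt=1.0):
    # One stock (population) updated by two flows (births, deaths)
    # over discrete time steps; returns the full trajectory so the
    # simulated pattern of behavior over time can be inspected.
    history = [p0]
    p = p0
    for _ in range(steps):
        p += (birth_rate * p - death_rate * p) * dt
        history.append(p)
    return history

# 2% net growth per step: 1000 -> 1020 -> 1040.4 -> ~1061.2
print(simulate_population(1000, 0.03, 0.01, 3))
```

Experimentation then means re-running the model under alternative parametric assumptions (e.g., varying the rates or the time step) and comparing the simulated trajectories against observed behavior, which is the basis for the validation step the entry describes.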
