All Topics

This knowledge area encompasses a variety of data-driven analytics, geocomputational methods, and simulation- and model-driven approaches designed to study complex spatial-temporal problems, develop insights into the characteristics of geospatial data sets, create and test geospatial process models, and build knowledge of the behavior of geographically explicit, dynamic processes and their patterns.

Topics in this Knowledge Area are listed thematically below. Existing topics are in regular font and linked directly to their original entries (published in 2006; these contain only Learning Objectives). Entries that have been updated and expanded are in bold. Forthcoming topics are italicized.


Methodological Context
  • Geospatial Analysis & Model Building
  • Changing Context of GIScience

Building Blocks
  • Overlay & Combination Operations
  • Areal Interpolation
  • Aggregation of Spatial Entities
  • Classification & Clustering
  • Boundaries & Zone Membership
  • Spatial Queries
  • Buffers
  • Grid Operations & Map Algebra

Data Exploration & Spatial Statistics
  • Spatial Statistics
  • Spatial Sampling for Spatial Analysis
  • Exploratory Spatial Data Analysis (ESDA)
  • Point Pattern Analysis
  • Kernels & Density Estimation
  • Spatial Interaction
  • Cartographic Modeling
  • Multi-criteria Evaluation
  • Grid-based Statistics and Metrics
  • Landscape Metrics
  • Hot-spot and Cluster Analysis
  • Global Measures of Spatial Association
  • Local Indicators of Spatial Autocorrelation
  • Simple Regression & Trend Surface Analysis
  • Geographically Weighted Regression
  • Spatial Autoregressive & Bayesian Methods
  • Spatial Filtering Models

Surface & Field Analyses
  • Modeling Surfaces
  • Gridding, Interpolation, and Contouring
  • Inverse Distance Weighting
  • Radial Basis & Spline Functions
  • Polynomial Functions
  • Kriging Interpolation
  • LiDAR Point Cloud Analysis
  • Intervisibility, Line-of-Sight, and Viewsheds
  • Digital Elevation Models & Terrain Metrics
  • TIN-based Models and Terrain Metrics
  • Watersheds & Drainage
  • 3D Parametric Surfaces

Network & Location Analysis
  • Intro to Network & Location Analysis
  • Location & Service Area Problems
  • Network Route & Tour Problems
  • Modelling Accessibility
  • Location-allocation Modeling
  • The Classic Transportation Problem

Space-Time Analysis & Modeling
  • Time Geography
  • Capturing Spatio-Temporal Dynamics in Computational Modeling
  • GIS-Based Computational Modeling
  • Computational Movement Analysis
  • Volumes and Space-Time Volumes

Geocomputational Methods & Models
  • Cellular Automata
  • Agent-based Modeling
  • Simulation Modeling
  • Artificial Neural Networks
  • Genetic Algorithms & Evolutionary Computing

Big Data & Geospatial Analysis
  • Problems with Large Spatial Databases
  • Pattern Recognition & Matching
  • Artificial Intelligence Approaches
  • Intro to Spatial Data Mining
  • Rule Learning for Spatial Data Mining
  • Machine Learning Approaches
  • CyberGIS and Cyberinfrastructure

Analysis of Errors & Uncertainty
  • Error-based Uncertainty
  • Conceptual Models of Error & Uncertainty
  • Spatial Data Uncertainty
  • Problems of Scale & Zoning
  • Thematic Accuracy & Assessment
  • Stochastic Simulation & Monte Carlo Methods
  • Mathematical Models of Uncertainty
  • Fuzzy Aggregation Operators


AM-07 - Point Pattern Analysis

Point pattern analysis (PPA) focuses on the analysis, modeling, visualization, and interpretation of point data. With the increasing availability of big geo-data, such as mobile phone records and social media check-ins, more and more individual-level point data are generated daily. PPA provides an effective approach to analyzing the distribution of such data. This entry provides an overview of commonly used methods in PPA and demonstrates their utility for scientific investigation through a classic case study: the 1854 cholera outbreak in London.
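
One commonly used PPA measure is the Clark-Evans nearest-neighbor index. The sketch below is a minimal NumPy illustration (the point coordinates and study-area size are made-up values), comparing the observed mean nearest-neighbor distance with its expectation under complete spatial randomness:

```python
import numpy as np

def nearest_neighbor_index(points, area):
    """Clark-Evans nearest-neighbor index: observed mean nearest-neighbor
    distance divided by its expectation under complete spatial randomness.
    R < 1 suggests clustering, R > 1 suggests dispersion."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore each point's zero self-distance
    observed = d.min(axis=1).mean()      # mean distance to nearest neighbor
    expected = 0.5 * np.sqrt(area / n)   # CSR expectation for n points in `area`
    return observed / expected

# Hypothetical example: a tight cluster inside a 10 x 10 study area
cluster = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1)]
print(nearest_neighbor_index(cluster, area=100.0))  # -> 0.04, strongly clustered
```

A value this far below 1 flags clustering; in practice the index is paired with a significance test against the CSR null model.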

AM-62 - Point, Line, and Area Generalization

Generalization is an important and unavoidable part of making maps because geographic features cannot be represented on a map without undergoing transformation. Maps abstract and portray features using vector (i.e., points, lines, and polygons) and raster (i.e., pixels) spatial primitives, which are usually labeled. These spatial primitives are subjected to further generalization when map scale is changed. Generalization is a contradictory process. On one hand, it alters the look and feel of a map to improve the overall user experience, especially regarding map reading and interpretive analysis. On the other hand, generalization has documented quality implications and can sacrifice feature detail, dimensions, positions, or topological relationships. A variety of techniques are used in generalization, including selection, simplification, displacement, exaggeration, and classification. These techniques are automated through computer algorithms such as Douglas-Peucker and Visvalingam-Whyatt to enhance their operational efficiency and create consistent generalization results. As maps are now created easily and quickly, and used widely by both experts and non-experts owing to major advances in IT, it is increasingly important for virtually everyone to appreciate the circumstances, techniques, and outcomes of generalizing maps. This is critical to promoting better map design and production as well as socially appropriate uses.
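
The Douglas-Peucker algorithm mentioned above can be sketched in a few lines. This is a minimal illustration rather than production code; the sample polyline and tolerance value are arbitrary:

```python
def douglas_peucker(points, tolerance):
    """Recursive Douglas-Peucker simplification: keep the vertex farthest
    from the anchor-floater segment if it exceeds the tolerance, otherwise
    replace the whole run with a straight segment."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # perpendicular distance from p to the line through the endpoints
        px, py = p
        dx, dy = x2 - x1, y2 - y1
        length = (dx * dx + dy * dy) ** 0.5
        if length == 0.0:
            return ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5
        return abs(dx * (y1 - py) - dy * (x1 - px)) / length

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > tolerance:
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right   # the split vertex appears in both halves
    return [points[0], points[-1]]

# Arbitrary sample polyline; larger tolerances remove more vertices
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=1.0))  # -> [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Note how the tolerance directly controls the trade-off described above: detail is sacrificed in exchange for a simpler, more legible line.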

AM-27 - Principles of semi-variogram construction
  • Identify and define the parameters of a semi-variogram (range, sill, nugget)
  • Demonstrate how semi-variograms react to spatial nonstationarity
  • Construct a semi-variogram and illustrate with a semi-variogram cloud
  • Describe the relationships between semi-variograms and correlograms, and Moran’s indices of spatial association
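
The construction described in these objectives can be illustrated with a short NumPy sketch that bins the semi-variogram cloud (squared half-differences of all point pairs) into lag classes; the transect data below are invented for demonstration:

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_width, n_bins):
    """Bin the semi-variogram cloud (squared half-differences of every
    point pair) by separation distance; each bin's mean is one point on
    the experimental semi-variogram."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(values), k=1)            # all unordered pairs
    h = np.linalg.norm(coords[i] - coords[j], axis=1)   # lag distances
    cloud = 0.5 * (values[i] - values[j]) ** 2          # semi-variogram cloud
    bins = (h / bin_width).astype(int)
    lags, semis = [], []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            lags.append(float(h[mask].mean()))
            semis.append(float(cloud[mask].mean()))
    return lags, semis

# Tiny made-up transect: three samples one unit apart with a linear trend
coords = [(0, 0), (1, 0), (2, 0)]
values = [0.0, 1.0, 2.0]
print(empirical_semivariogram(coords, values, bin_width=1.5, n_bins=2))
# -> ([1.0, 2.0], [0.5, 2.0])
```

Plotting the binned semi-variance against lag distance reveals the range, sill, and nugget named in the objectives above.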
AM-87 - Problems of currency, source, and scale
  • Describe the problem of conflation associated with aggregation of data collected at different times, from different sources, and to different scales and accuracy requirements
  • Explain how geostatistical techniques might be used to address such problems
AM-60 - Raster resampling
  • Evaluate methods used by contemporary GIS software to resample raster data on-the-fly during display
  • Select appropriate interpolation techniques to resample particular types of values in raster data (e.g., nominal using nearest neighbor)
  • Resample multiple raster data sets to a single resolution to enable overlay
  • Resample raster data sets (e.g., terrain, satellite imagery) to a resolution appropriate for a map of a particular scale
  • Discuss the consequences of increasing and decreasing resolution
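
As a minimal illustration of why nominal rasters call for nearest-neighbor resampling, the sketch below upsamples a hypothetical land-cover grid without inventing any new category values (the grid and factor are made-up; real GIS software also offers bilinear and cubic methods, which are appropriate for continuous data such as terrain):

```python
import numpy as np

def resample_nearest(grid, factor):
    """Nearest-neighbor resampling: each output cell copies the closest
    input cell, so no new (interpolated) category values are invented --
    the right choice for nominal rasters such as land cover."""
    rows = (np.arange(int(grid.shape[0] * factor)) / factor).astype(int)
    cols = (np.arange(int(grid.shape[1] * factor)) / factor).astype(int)
    return grid[np.ix_(rows, cols)]

# Hypothetical 2 x 2 land-cover grid with class codes 1-4
landcover = np.array([[1, 2],
                      [3, 4]])
print(resample_nearest(landcover, 2))  # each cell becomes a 2 x 2 block of the same class
```

A bilinear scheme applied to the same grid would produce fractional codes such as 2.5, which are meaningless for categorical data.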
AM-68 - Rule Learning for Spatial Data Mining

Recent research has identified rule learning as a promising technique for geographic pattern mining and knowledge discovery to make sense of the big spatial data avalanche (Koperski & Han, 1995; Shekhar et al., 2003). Rules conveying associative implications regarding locations, as well as the semantic and spatial characteristics of analyzed spatial features, are of particular interest. This overview considers fundamentals and recent advancements in two approaches applied to spatial data: spatial association rule learning and co-location rule learning.
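
One building block of co-location rule learning is the participation ratio of a feature type in a candidate pattern. The sketch below is a minimal NumPy illustration (the feature locations and distance threshold are invented):

```python
import numpy as np

def colocation_participation(A, B, d):
    """Participation ratio of feature type A in the co-location pattern
    {A, B}: the fraction of A instances that have at least one B instance
    within distance d."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return (dists.min(axis=1) <= d).mean()

# Invented feature locations: three shops, two banks
shops = [(0, 0), (1, 0), (5, 5)]
banks = [(0.5, 0), (9.8, 10)]
print(colocation_participation(shops, banks, d=1.0))  # -> 0.6666666666666666
```

Taking the minimum of this ratio over all feature types in a pattern gives the participation index used to prune weak co-location candidates.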

AM-28 - Semi-variogram modeling
  • List the possible sources of error in a selected and fitted model of an experimental semi-variogram
  • Describe the conditions under which each of the commonly used semi-variograms models would be most appropriate
  • Explain the necessity of defining a semi-variogram model for geographic data
  • Apply the method of weighted least squares and maximum likelihood to fit semi-variogram models to datasets
  • Describe some commonly used semi-variogram models
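
The weighted-least-squares fitting named in these objectives can be sketched as follows. This toy version fits the commonly used spherical model by brute-force grid search rather than the iterative optimizers real geostatistics packages employ, and the experimental values are generated from a known model:

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical semi-variogram model: climbs from the nugget and levels
    off at the sill once the lag h reaches the range."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, inside)

def fit_spherical(lags, semis, weights=None):
    """Weighted-least-squares fit by brute-force grid search over
    (nugget, sill, range); illustrative only -- production software uses
    iterative optimizers instead."""
    lags = np.asarray(lags, dtype=float)
    semis = np.asarray(semis, dtype=float)
    w = np.ones_like(semis) if weights is None else np.asarray(weights, dtype=float)
    best, best_err = None, np.inf
    for nugget in np.linspace(0.0, semis.max(), 11):
        for sill in np.linspace(nugget, 1.5 * semis.max(), 21):
            for rng in np.linspace(lags.max() / 10, 2 * lags.max(), 21):
                err = np.sum(w * (spherical(lags, nugget, sill, rng) - semis) ** 2)
                if err < best_err:
                    best, best_err = (nugget, sill, rng), err
    return best

# Synthetic experimental values from a known model (nugget 0, sill 1, range 5)
lags = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
semis = spherical(lags, nugget=0.0, sill=1.0, rng=5.0)
print(fit_spherical(lags, semis))  # recovered (nugget, sill, range), near the true values
```

The weights argument allows the common practice of down-weighting sparsely populated lag bins when fitting.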
AM-84 - Simulation Modeling

Advances in computational capacity have enabled dynamic simulation modeling to become increasingly widespread in scientific research. As opposed to conceptual or physical models, simulation models enable numerical experimentation with alternative parametric assumptions for a given model design. Numerous design choices are made in model development that involve continuous or discrete representations of time and space. Simulation modeling approaches include system dynamics, discrete event simulation, agent-based modeling, and multi-method modeling. The model development process involves a shift from qualitative design to quantitative analysis once a model is implemented in a computer program or software platform. Model analysis is then performed through rigorous experimentation to test how model structure produces simulated patterns of behavior over time and space. Validation of a model through correspondence of simulated results with observed behavior facilitates its use as an analytical tool for evaluating strategies and policies that would alter system behavior.
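
The discrete-time stock-and-flow logic at the heart of a system dynamics model can be sketched in a few lines; the population model, rates, and step count below are invented for illustration:

```python
def simulate_population(initial, birth_rate, death_rate, steps, dt=1.0):
    """One stock (population) updated by two flows (births, deaths) in
    discrete time steps -- the core loop of a system dynamics model."""
    stock = initial
    trajectory = [stock]
    for _ in range(steps):
        births = birth_rate * stock
        deaths = death_rate * stock
        stock += (births - deaths) * dt   # Euler integration of the net flow
        trajectory.append(stock)
    return trajectory

# Hypothetical rates: 5% births, 3% deaths -> 2% net compound growth per step
print(simulate_population(1000.0, 0.05, 0.03, steps=5))
```

Experimenting with alternative rate assumptions, as the abstract describes, amounts to re-running this loop under different parameter values and comparing the simulated trajectories.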

AM-32 - Spatial autoregressive models
  • Explain Anselin’s typology of spatial autoregressive models
  • Demonstrate how the parameters of spatial auto-regressive models can be estimated using univariate and bivariate optimization algorithms for maximizing the likelihood function
  • Justify the choice of a particular spatial autoregressive model for a given application
  • Implement a maximum likelihood estimation procedure for determining key spatial econometric parameters
  • Apply spatial statistic software (e.g., GEODA) to create and estimate an autoregressive model
  • Conduct a spatial econometric analysis to test for spatial dependence in the residuals from least-squares models and spatial autoregressive models
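
A concentrated-log-likelihood estimate of the autoregressive parameter can be sketched as below. This toy version grid-searches rho for a first-order SAR model on a synthetic ring lattice (all data are simulated), whereas spatial econometric software uses proper optimizers and sparse-matrix methods:

```python
import numpy as np

def fit_sar_rho(y, W, grid=np.linspace(-0.9, 0.9, 181)):
    """Estimate rho in the first-order SAR model y = rho*W*y + eps by
    maximizing the concentrated log-likelihood over a grid of candidate
    values (W is assumed row-standardized)."""
    n = len(y)
    best_rho, best_ll = None, -np.inf
    for rho in grid:
        A = np.eye(n) - rho * W
        sign, logdet = np.linalg.slogdet(A)   # log-Jacobian term
        if sign <= 0:
            continue                          # outside the feasible range
        e = A @ y                             # residuals for this rho
        ll = logdet - 0.5 * n * np.log(e @ e / n)
        if ll > best_ll:
            best_rho, best_ll = rho, ll
    return best_rho

# Synthetic data: 100 locations on a ring, each neighboring the two adjacent ones
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5
rng = np.random.default_rng(42)
eps = rng.standard_normal(n)
y = np.linalg.solve(np.eye(n) - 0.6 * W, eps)  # true rho = 0.6
rho_hat = fit_sar_rho(y, W)
print(rho_hat)  # estimate should land near the true value of 0.6
```

The log-determinant term is what distinguishes this likelihood from ordinary least squares; dropping it biases the estimate of rho.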
AM-107 - Spatial Data Uncertainty

Although spatial data users may not be aware of the inherent uncertainty in all the datasets they use, it is critical to evaluate data quality in order to understand the validity and limitations of any conclusions based on spatial data. Spatial data uncertainty is inevitable because all representations of the real world are imperfect. This topic presents the importance of understanding spatial data uncertainty and discusses major methods and models to communicate, represent, and quantify positional and attribute uncertainty in spatial data, including both analytical and simulation approaches. Geo-semantic uncertainty involving vague geographic concepts and classes is also addressed from the perspectives of fuzzy-set approaches and cognitive experiments. Potential methods for assessing the quality of large volumes of crowd-sourced geographic data are also discussed. Finally, this topic ends with directions for future research on spatial data quality and uncertainty.
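
A minimal example of the simulation approach: propagating Gaussian positional error in polygon vertices into uncertainty about polygon area via Monte Carlo (the polygon, error level, and run count below are illustrative assumptions):

```python
import numpy as np

def polygon_area(xy):
    """Shoelace formula for a simple polygon given as an (n, 2) vertex array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def area_uncertainty(xy, sigma, n_sim=2000, seed=0):
    """Monte Carlo error propagation: perturb every vertex with independent
    Gaussian positional error of standard deviation sigma, then summarize
    the distribution of simulated areas."""
    rng = np.random.default_rng(seed)
    areas = np.array([polygon_area(xy + rng.normal(0.0, sigma, xy.shape))
                      for _ in range(n_sim)])
    return areas.mean(), areas.std()

# A 10 x 10 square whose vertices carry 0.5-unit positional error
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
mean_area, sd_area = area_uncertainty(square, sigma=0.5)
print(mean_area, sd_area)  # mean near the true area of 100; sd quantifies the uncertainty
```

The same recipe extends to any derived quantity (length, slope, overlay results): perturb the inputs according to their error model and summarize the distribution of outputs.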
