PD-14 - GIS and Parallel Programming

Programming is a sought-after skill in GIS, but traditional programming (also called serial programming) uses only one processing core. Modern desktop computers, laptops, and even cellphones now have multiple processing cores, which can be used simultaneously to increase processing capability for a range of GIS applications. Parallel programming is a type of programming that uses multiple processing cores simultaneously to solve a problem, enabling GIS applications to leverage more of the processing power available on modern computing architectures, from desktop computers to supercomputers. Advanced parallel programming can leverage hundreds or thousands of cores on high-performance computing resources to process big spatial datasets or run complex spatial models.

Parallel programming is both a science and an art. While there are established methods and principles, deciding when, how, and why certain methods should be applied over others in a specific GIS application remains more of an art than a science. The following sections introduce the concept of parallel programming and discuss how to parallelize a spatial problem and measure parallel performance.
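One common way to reason about the parallel performance mentioned above is Amdahl's law, a standard model for the theoretical speedup of a program in which only a fraction of the work can be parallelized. The sketch below is illustrative; the function names are not from any GIS library.

```python
# A minimal sketch of estimating parallel performance with Amdahl's law.
# If a fraction p of a program's work can be parallelized across n cores,
# the remaining (1 - p) stays serial and caps the achievable speedup.

def amdahl_speedup(p, n):
    """Theoretical speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_efficiency(p, n):
    """Speedup divided by core count: how well each core is used."""
    return amdahl_speedup(p, n) / n

# Even a highly parallel spatial operation (95% parallelizable)
# saturates well below linear speedup as core counts grow.
for cores in (2, 8, 64):
    print(f"{cores} cores: speedup = {amdahl_speedup(0.95, cores):.2f}")
```

This is why, in practice, profiling the serial portion of a spatial workflow (e.g., data loading or result merging) matters as much as parallelizing the computation itself.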

DM-79 - U.S. National Spatial Data Infrastructure

Spatial data infrastructures may be thought of as socio-technical frameworks for coordinating the development, management, sharing, and use of geospatial data across multiple organizational jurisdictions and varying geographic extents. The United States was an early adopter of the SDI concept, and the U.S. National Spatial Data Infrastructure (NSDI) is an example of a country-wide SDI implementation facilitated by coordination at the federal-government level. At the time of its establishment in the early 1990s, a unique characteristic of the NSDI was a mandate for federal agencies to establish partnerships with state- and local-level governments. This entry summarizes the origins of the NSDI's establishment, its original core components and how they have evolved over the last 25 years, the role of the Federal Geographic Data Committee (FGDC), and the anticipated impact of passage of the Geospatial Data Act of 2018. For broader technical information about SDIs, readers are referred to GIST BoK Entry DM-60: Spatial Data Infrastructures (Hu and Li 2017). For additional details on the history of the NSDI, readers are referred to Rhind (1999). For the latest information on recent and emerging NSDI initiatives, please visit the FGDC web site.

AM-93 - Artificial Intelligence Approaches

Artificial Intelligence (AI) has received tremendous attention from academia, industry, and the general public in recent years. The integration of geography and AI, or GeoAI, provides novel approaches for addressing a variety of problems in the natural environment and our human society. This entry briefly reviews the recent development of AI with a focus on machine learning and deep learning approaches. We discuss the integration of AI with geography and particularly geographic information science, and present a number of GeoAI applications and possible future directions.

AM-107 - Spatial Data Uncertainty

Although spatial data users may not be aware of the inherent uncertainty in all the datasets they use, it is critical to evaluate data quality in order to understand the validity and limitations of any conclusions based on spatial data. Spatial data uncertainty is inevitable as all representations of the real world are imperfect. This topic presents the importance of understanding spatial data uncertainty and discusses major methods and models to communicate, represent, and quantify positional and attribute uncertainty in spatial data, including both analytical and simulation approaches. Geo-semantic uncertainty that involves vague geographic concepts and classes is also addressed from the perspectives of fuzzy-set approaches and cognitive experiments. Potential methods that can be implemented to assess the quality of large volumes of crowd-sourced geographic data are also discussed. Finally, this topic ends with future directions to further research on spatial data quality and uncertainty.
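The simulation approaches to positional uncertainty mentioned above can be illustrated with a minimal Monte Carlo sketch: the endpoints of a measured line are repeatedly perturbed with Gaussian positional noise to see how the error propagates into a derived distance. All names and parameter choices here are illustrative assumptions, not a method prescribed by the entry.

```python
import math
import random

def simulate_distance_uncertainty(p1, p2, sigma, n=10000, seed=42):
    """Propagate positional uncertainty into a derived distance by
    perturbing both endpoints with Gaussian noise (std. dev. sigma),
    then summarizing the resulting distribution of distances."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x1 = p1[0] + rng.gauss(0, sigma)
        y1 = p1[1] + rng.gauss(0, sigma)
        x2 = p2[0] + rng.gauss(0, sigma)
        y2 = p2[1] + rng.gauss(0, sigma)
        samples.append(math.hypot(x2 - x1, y2 - y1))
    mean = sum(samples) / n
    sd = (sum((s - mean) ** 2 for s in samples) / (n - 1)) ** 0.5
    return mean, sd

# Two points nominally 100 units apart, each with 1-unit positional error:
mean, sd = simulate_distance_uncertainty((0.0, 0.0), (100.0, 0.0), sigma=1.0)
print(f"distance: {mean:.2f} +/- {sd:.2f}")
```

The same pattern generalizes to more complex derived quantities (areas, buffer overlaps), which is why simulation is often used where closed-form error propagation is intractable.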

DM-71 - Geospatial Data Conflation

Spatial data conflation is the process of combining overlapping spatial datasets to produce a better dataset with higher accuracy or more information. Conflation is needed in many fields, ranging from transportation planning to the analysis of historical datasets, which require the use of multiple data sources. Geospatial data conflation becomes increasingly important with the advancement of GIS and the emergence of new sources of spatial data such as Volunteered Geographic Information.

Conceptually, conflation is a two-step process: identifying counterpart features that correspond to the same object in reality, and merging the geometry and attributes of those counterpart features. In practice, conflation can be performed either manually or with the aid of GIS with varying degrees of automation. Manual conflation is labor-intensive, time-consuming, and expensive. It is often adopted in practice, nonetheless, due to the lack of reliable automatic conflation methods.

A main challenge of automatic conflation lies in the automatic matching of corresponding features, due to the varying quality and different representations of map data. Many (semi-)automatic feature matching methods exist. They typically involve measuring the distance between each feature pair and matching the pairs with the smallest dissimilarity using a specially designed algorithm or model. Fully automated conflation is still an active research field.
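A minimal sketch of the distance-based matching idea described above, assuming both datasets are simple point features: candidate pairs are ranked by Euclidean distance, and the closest pairs are accepted greedily so that each feature matches at most once. Function and parameter names are hypothetical, and real conflation systems use richer dissimilarity measures (shape, attributes, topology) than plain distance.

```python
import math

def match_features(dataset_a, dataset_b, max_dist):
    """Greedily match each point feature in A to its nearest unmatched
    counterpart in B, if within max_dist (a dissimilarity threshold).
    Returns a list of (index_a, index_b) pairs."""
    # Enumerate all candidate pairs within the threshold.
    candidates = []
    for i, (ax, ay) in enumerate(dataset_a):
        for j, (bx, by) in enumerate(dataset_b):
            d = math.hypot(ax - bx, ay - by)
            if d <= max_dist:
                candidates.append((d, i, j))
    # Accept the closest pairs first; each feature matches at most once.
    candidates.sort()
    matched_a, matched_b, pairs = set(), set(), []
    for d, i, j in candidates:
        if i not in matched_a and j not in matched_b:
            matched_a.add(i)
            matched_b.add(j)
            pairs.append((i, j))
    return pairs

# Only the first pair of features lies within the 2-unit threshold:
print(match_features([(0, 0), (10, 10)], [(0.5, 0), (50, 50)], max_dist=2.0))
```

The greedy acceptance here is the simplest matching strategy; optimization-based assignment (e.g., minimizing total dissimilarity) is common in the literature when many ambiguous candidates exist.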

PD-20 - Real-time GIS Programming and Geocomputation

Streaming data generated continuously from sensor networks, mobile devices, social media platforms, and other edge devices pose significant challenges to existing computing platforms, which must achieve both high throughput and low latency in addition to scalable computing. This entry introduces a real-time computing and programming platform for time-critical GIS (Geographic Information System) applications. In this platform, advanced streaming data processing software, such as Apache Kafka and Spark Streaming, is integrated to enable data analytics in real time. The platform can also be extended to integrate GeoAI (Geospatial Artificial Intelligence) based machine learning models that leverage both historical and streaming data for real-time prediction and intelligent geospatial analytics. Two real-time geospatial applications, flood simulation and climate data visualization, are introduced to demonstrate how real-time programming and computing can help tackle real-world problems with important societal impacts.
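Systems such as Apache Kafka and Spark Streaming provide windowed stream aggregation at scale; the core idea they implement can be sketched with a toy tumbling-window aggregator over timestamped sensor readings. This is an illustrative sketch of the concept only, not an API from either system, and all names are assumptions.

```python
from collections import defaultdict

class TumblingWindowAverager:
    """Group timestamped sensor readings into fixed-width, non-overlapping
    windows and report a per-window average once a window is closed."""

    def __init__(self, width):
        self.width = width            # window width in time units
        self.buckets = defaultdict(list)

    def ingest(self, timestamp, value):
        # Integer division assigns each reading to exactly one window.
        self.buckets[timestamp // self.width].append(value)

    def close_window(self, window_id):
        # Emit the window's aggregate and release its buffered readings.
        values = self.buckets.pop(window_id, [])
        return sum(values) / len(values) if values else None

# Readings at t=0 and t=5 fall in window 0; t=12 falls in window 1.
w = TumblingWindowAverager(width=10)
w.ingest(0, 1.0)
w.ingest(5, 3.0)
w.ingest(12, 4.0)
print(w.close_window(0), w.close_window(1))
```

Production systems add what this toy omits: out-of-order (late) events, watermarks to decide when a window can safely close, and fault-tolerant distributed state.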

KE-25 - GIS&T Education and Training

GIS education and training have their roots both in formal educational settings and in professional development.  Methods and approaches for teaching and learning about and with geospatial technologies have evolved in tight connection with the advances in the internet and personal computers.  The adoption and integration of GIS and related geospatial technologies into dozens of academic disciplines has led to a high demand for instruction that is targeted and timely, a combination that is challenging to meet consistently with diverse audiences and in diverse settings. Academic degrees, concentrations, minors, certificates, and numerous other programs abound within formal and informal education.

DA-04 - GIS&T and Civil Engineering

Civil Engineering, which includes sub-disciplines such as environmental, geotechnical, structural, and water resource engineering, is increasingly dependent on GIS&T for the planning, design, operation, and management of civil engineering infrastructure systems.  Typical tasks include the management of spatially referenced data sets, analytic modeling for making design decisions and estimating likely system behavior and impacts, and the visualization of systems for the decision-making process and garnering stakeholder support.

KE-24 - GIS&T Positions and Qualifications

Workforce needs tied to geospatial data continue to evolve.  Along with expansion in the absolute number of geospatial workers employed in the public and private sectors is greater diversity in the fields where their work has become important.  Together, these trends generate demand for new types of educational and professional development programs and opportunities. Colleges and universities have responded by offering structured academic programs ranging from minors and academic certificates to full GIS&T degrees.  Recent efforts also target experienced GIS&T professionals through technical certifications involving software applications and more comprehensive professional certifications designed to recognize knowledge, experience, and expertise.

AM-97 - An Introduction to Spatial Data Mining

The goal of spatial data mining is to discover potentially useful, interesting, and non-trivial patterns from spatial datasets (e.g., GPS trajectories from smartphones). Spatial data mining is societally important, with applications in public health, public safety, climate science, etc. For example, in epidemiology, spatial data mining helps to find areas with a high concentration of disease incidents to manage disease outbreaks. Computational methods are needed to discover spatial patterns since the volume and velocity of spatial data exceed the ability of human experts to analyze it. Spatial data has unique characteristics like spatial autocorrelation and spatial heterogeneity, which violate the i.i.d. (Independent and Identically Distributed) assumption of traditional statistical and data mining methods. Therefore, using traditional methods may miss patterns or may yield spurious patterns, which are costly in societal applications. Further, there are additional challenges such as the MAUP (Modifiable Areal Unit Problem), as illustrated by a recent court case debating gerrymandering in elections. In this article, we discuss tools and computational methods of spatial data mining, focusing on the primary spatial pattern families: hotspot detection, collocation detection, spatial prediction, and spatial outlier detection. Hotspot detection methods use domain information to accurately model more active and high-density areas. Collocation detection methods find types of objects whose instances are frequently located in close proximity to one another. Spatial prediction approaches explicitly model the neighborhood relationships of locations to predict target variables from input features. Finally, spatial outlier detection methods find data that differ from their neighbors. Lastly, we describe future research and trends in spatial data mining.
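The spatial autocorrelation property noted above is commonly quantified with global Moran's I; a minimal sketch with binary contiguity weights follows. The neighbor-list input representation is an illustrative assumption, and libraries such as PySAL provide full-featured implementations with significance testing.

```python
def morans_i(values, neighbors):
    """Global Moran's I with binary spatial weights.

    values:    attribute value for each areal unit.
    neighbors: neighbors[i] lists the indices of units adjacent to i
               (assumed symmetric: j in neighbors[i] iff i in neighbors[j]).
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]              # deviations from the mean
    w_sum = sum(len(nb) for nb in neighbors)      # total weight (binary)
    # Cross-products of deviations between each unit and its neighbors.
    num = sum(dev[i] * dev[j] for i in range(n) for j in neighbors[i])
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Perfectly alternating values along a line of four units: every neighbor
# differs maximally, giving strong negative autocorrelation.
line_neighbors = [[1], [0, 2], [1, 3], [2]]
print(morans_i([1.0, 0.0, 1.0, 0.0], line_neighbors))
```

Values near +1 indicate that similar values cluster in space (the common case for phenomena like disease incidence), values near -1 indicate a checkerboard-like dispersion, and values near the expectation -1/(n-1) suggest spatial randomness.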