DM-85 - Point, Line, and Area Generalization

Generalization is an important and unavoidable part of making maps because geographic features cannot be represented on a map without undergoing transformation. Maps abstract and portray features using vector (i.e., points, lines, and polygons) and raster (i.e., pixels) spatial primitives, which are usually labeled. These spatial primitives are subjected to further generalization when map scale is changed. Generalization is a contradictory process. On one hand, it alters the look and feel of a map to improve the overall user experience, especially regarding map reading and interpretive analysis. On the other hand, generalization has documented quality implications and can sacrifice feature detail, dimensions, positions, or topological relationships. A variety of techniques are used in generalization, including selection, simplification, displacement, exaggeration, and classification. These techniques are automated through computer algorithms such as Douglas-Peucker and Visvalingam-Whyatt to enhance their operational efficiency and create consistent generalization results. As maps are now created easily and quickly, and used widely by both experts and non-experts owing to major advances in information technology, it is increasingly important for virtually everyone to appreciate the circumstances, techniques, and outcomes of generalizing maps. This is critical to promoting better map design and production as well as socially appropriate uses.
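As an illustration of one such algorithm, here is a minimal Python sketch of Douglas-Peucker line simplification; the tolerance and sample coordinates are invented for the example, and production systems would typically rely on a tested library implementation.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:                     # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping vertices that deviate from the
    endpoint-to-endpoint line by more than the tolerance."""
    if len(points) < 3:
        return points
    # find the vertex farthest from the line joining the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:                       # all within tolerance: drop interior vertices
        return [points[0], points[-1]]
    # otherwise split at the farthest vertex and simplify each half
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right                    # avoid duplicating the split vertex

line = [(0, 0), (1, 0.2), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=1.0))
```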

DM-88 - Coordinate Transformations

Coordinate transformations are needed to align GIS datasets to a single coordinate system when they use different coordinate systems. To transform coordinates, the properties of the source and target coordinate systems, such as their datums, projection methods, and measurement origins and units, must be identified carefully. Implemented in most GIS software and GIS data viewers, on-the-fly projection reprojects GIS datasets automatically, without the need for manual coordinate transformations by users. The coordinate transformation mechanisms for vector and raster datasets differ because raster datasets require pixel value resampling during coordinate transformations. As a case study, eight GIS datasets were downloaded from multiple websites and reprojected to a common coordinate system in QGIS.
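For readers who prefer to script such a transformation rather than rely on on-the-fly projection, here is a brief sketch using the pyproj library; the EPSG codes and the coordinate are illustrative and unrelated to the case study above.

```python
from pyproj import Transformer

# Source: WGS 84 geographic coordinates; target: UTM zone 17N.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)

lon, lat = -83.0, 40.0                    # an illustrative point
easting, northing = transformer.transform(lon, lat)
print(f"easting={easting:.1f} m, northing={northing:.1f} m")
```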

DM-60 - Spatial Data Infrastructures

Spatial data infrastructure (SDI) is the infrastructure that facilitates the discovery, access, management, distribution, reuse, and preservation of digital geospatial resources. These resources may include maps, data, geospatial services, and tools. As cyberinfrastructures, SDIs are similar to other infrastructures, such as water supplies and transportation networks, in that they play fundamental roles in many aspects of society. These roles have become even more significant in today’s big data age, when a large volume of geospatial data and Web services are available. From a technological perspective, SDIs mainly consist of data, hardware, and software. However, a truly functional SDI also needs the efforts of people, support from organizations, government policies, data and software standards, and many other elements. In this chapter, we will present the concepts and values of SDIs, as well as a brief history of SDI development in the U.S. We will also discuss the components of a typical SDI, focusing on three key components: geoportals, metadata, and search functions. Examples of existing SDI implementations will also be discussed.
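As a sketch of how an SDI search function might be exercised programmatically, the snippet below queries a catalogue service (CSW) with the OWSLib library; the endpoint URL and the search term are hypothetical.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# hypothetical geoportal catalogue endpoint
csw = CatalogueServiceWeb("https://example.org/geoportal/csw")

# full-text search of the metadata records for "flood"
query = PropertyIsLike("csw:AnyText", "%flood%")
csw.getrecords2(constraints=[query], maxrecords=5)

for record_id, record in csw.records.items():
    print(record.title)
```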

DM-34 - Conceptual Data Models

Within the initial phase of database design, a conceptual data model is created as a technology-independent specification of the data to be stored within a database. This specification often takes the form of a formalized diagram. The process of conceptual data modeling is meant to foster shared understanding among data modelers and stakeholders when creating the specification. As such, a conceptual data model should be easily readable by people with little or no technical, computer-based expertise, because a comprehensive view of information is more important than a detailed view. In a conceptual data model, entity classes are categories of things (person, place, thing, etc.) that have attributes describing the characteristics of those things. Relationships can exist between the entity classes. Entity-relationship diagrams have been, and are likely to remain, a popular way of characterizing entity classes, attributes, and relationships, and various diagram notations have been used over the years. The main intent of a conceptual data model and its corresponding entity-relationship diagram is to highlight the content and meaning of data within stakeholder information contexts, while postponing the specification of logical structure to the second phase of database design, called logical data modeling.
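Although a conceptual model is technology independent, the toy Python sketch below may help make entity classes, attributes, and a relationship concrete; the Parcel and Owner entities are invented for the example and imply no physical design.

```python
from dataclasses import dataclass, field

@dataclass
class Owner:                    # entity class: a kind of "person"
    name: str                   # attribute
    address: str                # attribute

@dataclass
class Parcel:                   # entity class: a kind of "place"
    parcel_no: str              # attribute
    area_ha: float              # attribute
    owners: list[Owner] = field(default_factory=list)  # "owned by" relationship

p = Parcel("12-345", 2.5, [Owner("A. Mapper", "1 Main St")])
print(p.owners[0].name)
```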

DM-52 - Horizontal (Geometric) Datums

A horizontal (geometric) datum provides accurate coordinates (e.g., latitude and longitude) for points on Earth’s surface. Historically, surveyors developed a datum using optically sighted instruments to manually place intervisible survey marks in the ground. This survey work incorporated geometric principles of baselines, distances, and azimuths through the process of triangulation to attach a coordinate value to each survey mark. Triangulation produced a geodetic network of interconnected survey marks that realized the datum (i.e., connected the geometry of the network to Earth’s physical surface). For local surveys, these datums provided reasonable positional accuracies on the order of meters. Importantly, once placed in the ground, these survey marks were passive; a new survey was needed to determine any positional changes (e.g., due to plate motion) and to update the attached coordinate values. Starting in the 1950s, space-based satellite geodesy changed how geodetic networks were realized through the implementation of active control. Here, "active" implies that a survey mark’s coordinates are updated in near real time through, for example, artificial satellites such as GNSS. Increasingly, GNSS and satellite geodesy are paving the way for a modernized geometric datum that is global in scope and capable of providing positional accuracies at the millimeter level.
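The geometric idea behind attaching coordinates to survey marks can be sketched with pyproj's geodesic solver: given a known mark, an azimuth, and a measured distance, compute the next mark. The ellipsoid choice and input values below are illustrative.

```python
from pyproj import Geod

geod = Geod(ellps="GRS80")          # GRS80 reference ellipsoid

lon1, lat1 = -98.0, 39.0            # known survey mark (degrees)
azimuth, distance = 45.0, 5000.0    # forward azimuth (degrees), distance (meters)

# forward geodetic computation to the next mark
lon2, lat2, back_azimuth = geod.fwd(lon1, lat1, azimuth, distance)
print(f"new mark: lat={lat2:.6f}, lon={lon2:.6f}")
```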

DM-01 - Spatial Database Management Systems

A spatial database management system (SDBMS) is an extension, some might say specialization, of a conventional database management system (DBMS). Every DBMS (hence SDBMS) uses a data model specification as a formalism for software design and for establishing rigor in data management. A data model comprises three components: 1) constructs, developed using data types, which form the data structures that describe data; 2) operations that process those data structures to manipulate data; and 3) rules that establish the veracity of the structures and/or operations, for validating data. Basic data types such as integers and real numbers are extended into spatial data types such as points, polylines, and polygons in spatial data structures. Operations constitute capabilities that manipulate the data structures; when sequenced into operational workflows in specific ways, they generate information from data, and one might say the new relationships uncovered constitute that information. Different data model designs result in different combinations of structures, operations, and rules, which combine into various SDBMS products. The products differ based upon the underlying data model, and these data models enable and constrain the ability to store and manipulate data. Different SDBMS implementations support configurations for different user environments, including single-user and multi-user environments.
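A small sketch of spatial data types and the operations on them, using the shapely geometry library; the parcel and well geometries are invented for the example.

```python
from shapely.geometry import Point, Polygon

parcel = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])  # spatial data type: polygon
well = Point(4, 5)                                       # spatial data type: point

# operations on the structures generate information (relationships)
print(well.within(parcel))     # True: a topological predicate
print(parcel.area)             # 100.0: a metric operation
print(parcel.buffer(2).area)   # a constructive operation producing a new geometry
```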

DM-70 - Problems of Large Spatial Databases

Large spatial databases, often labeled geospatial big data, exceed the capacity of commonly used computing systems as a result of data volume, variety, velocity, and veracity. Additional problems also labeled with V’s are cited, but these four primary ones are the most problematic and are the focus of this chapter (Li et al., 2016; Panimalar et al., 2017). Sources include satellites, aircraft and drone platforms, vehicles, geosocial networking services, mobile devices, and cameras. The problems in processing these data to extract useful information include query, analysis, and visualization. Data mining techniques and machine learning algorithms, such as deep convolutional neural networks, are often used with geospatial big data. The most obvious problem is handling the large data volumes, particularly for input and output operations, which requires parallel reads and writes of the data as well as high-speed computers, disk services, and network transfer speeds. Additional problems of large spatial databases include the variety and heterogeneity of the data, which require advanced algorithms to handle different data types and characteristics and to integrate with other data. The velocity at which the data are acquired is a challenge, especially with today’s advanced sensors and the Internet of Things, which includes millions of devices creating data on short temporal scales of microseconds to minutes. Finally, the veracity, or truthfulness, of large spatial databases is difficult to establish and validate, particularly for all data elements in the database.
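One common response to the volume problem is tiling plus parallel processing. The sketch below partitions a spatial extent into tiles and processes them across CPU cores; the extent, tile size, and per-tile work are stand-ins invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

XMIN, YMIN, XMAX, YMAX, TILE = 0, 0, 1000, 1000, 250

def tiles():
    """Yield (xmin, ymin, xmax, ymax) tiles covering the extent."""
    for x, y in product(range(XMIN, XMAX, TILE), range(YMIN, YMAX, TILE)):
        yield (x, y, min(x + TILE, XMAX), min(y + TILE, YMAX))

def process_tile(bbox):
    """Stand-in for real per-tile work (e.g., reading and summarizing pixels)."""
    xmin, ymin, xmax, ymax = bbox
    return (xmax - xmin) * (ymax - ymin)   # here: just the tile area

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:    # parallel reads/processing
        areas = list(pool.map(process_tile, tiles()))
    print(f"{len(areas)} tiles processed, total area {sum(areas)}")
```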

DM-51 - Vertical (Geopotential) Datums

The elevation of a point requires a reference surface defining zero elevation. In geodesy, this zero-reference surface has historically been mean sea level (MSL), a vertical datum. However, the geoid, which is the particular equipotential surface of Earth’s gravity field that would coincide with mean sea level were mean sea level altogether unperturbed and placid, is the ideal datum for physical heights, meaning heights associated with the flow of water, like elevations. Tidal, gravimetric, and ellipsoidal datums are common vertical datums that use different approaches to define the reference surface. Tidal datums average water heights over a period of approximately 19 years, gravimetric datums record gravity across Earth’s surface, and ellipsoidal datums use specific reference ellipsoids to report ellipsoid heights. Increasingly, gravity measurements, positional data from GNSS (Global Navigation Satellite System), and other sophisticated measurement technologies such as GRACE-FO (Gravity Recovery and Climate Experiment – Follow On) are used to accurately model the geoid and its geopotential surface, advancing the idea of a geopotential datum. Stemming from these advancements, a new geopotential datum for the United States will be developed: the North American-Pacific Geopotential Datum of 2022 (NAPGD2022).
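The relationship that ties these datums together is h = H + N: an ellipsoid height h (e.g., from GNSS) equals the orthometric height H plus the geoid undulation N. A minimal sketch, with illustrative values:

```python
def orthometric_height(h_ellipsoid_m: float, geoid_undulation_m: float) -> float:
    """H = h - N: ellipsoid height minus geoid undulation gives orthometric height."""
    return h_ellipsoid_m - geoid_undulation_m

# e.g., a GNSS ellipsoid height of 250.00 m where the geoid lies 30.25 m
# below the ellipsoid (N = -30.25 m, typical of the conterminous U.S.)
print(orthometric_height(250.00, -30.25))   # 280.25 m above the geoid
```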

DM-35 - Logical Data Models

A logical data model is created for the second of three levels of abstraction: conceptual, logical, and physical. A logical data model expresses the meaning and context of a conceptual data model and adds detail about data(base) structures, e.g., using topologically organized records, relational tables, object-oriented classes, or extensible markup language (XML) tags. However, the logical data model remains independent of any particular database management software product. Nonetheless, such a model is often constrained by a class of software language techniques for representation, making implementation with a physical data model easier. Complex entity types of the conceptual data model must be translated into sub-type/super-type hierarchies to clarify data contexts for the entity type while avoiding duplication of concepts and data. Entities and records should have internal identifiers. Relationships can be used to express the involvement of entity types with activities or associations. A logical schema is formed from this data organization, and a schema diagram depicts the entity, attribute, and relationship detail for each application. The resulting logical data models can be synthesized using schema integration to support multi-user database environments, e.g., data warehouses for strategic applications and/or federated databases for tactical/operational business applications.
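To make one of the structure types above concrete, the sketch below expresses a toy schema as relational tables with internal identifiers and a foreign-key relationship, using Python's built-in sqlite3 purely as a vehicle; the Parcel/Owner schema is invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owner (
    owner_id  INTEGER PRIMARY KEY,                 -- internal identifier
    name      TEXT NOT NULL
);
CREATE TABLE parcel (
    parcel_id INTEGER PRIMARY KEY,                 -- internal identifier
    parcel_no TEXT UNIQUE,
    owner_id  INTEGER REFERENCES owner(owner_id)   -- relationship as a foreign key
);
""")
conn.execute("INSERT INTO owner VALUES (1, 'A. Mapper')")
conn.execute("INSERT INTO parcel VALUES (1, '12-345', 1)")
row = conn.execute(
    "SELECT p.parcel_no, o.name FROM parcel p JOIN owner o USING (owner_id)"
).fetchone()
print(row)   # ('12-345', 'A. Mapper')
```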

DM-80 - Ontology for Geospatial Semantic Interoperability

It is difficult to share and reuse geospatial data and retrieve geospatial information because of geospatial data heterogeneity problems. Lack of semantic interoperability is one of the major problems facing GIS (geographic information science/systems) applications today. To solve geospatial data heterogeneity problems and support geospatial information retrieval and semantic interoperability over the Web, the use of an ontology is proposed because it is a formal, explicit description of concepts, or meanings of words, in a well-defined and unambiguous manner. Geospatial ontologies represent geospatial concepts and properties for use over the Web. OWL (Web Ontology Language) is an emerging language for defining and instantiating ontologies. OWL builds on RDF (Resource Description Framework) but adds more vocabulary for describing properties and classes. The downside of representing structured geospatial data in OWL and RDF is that it can result in inefficient data access. SPARQL (SPARQL Protocol and RDF Query Language) is recommended for general RDF querying, while the OGC's GeoSPARQL is proposed as an extension of SPARQL for querying geospatial data. However, the runtime cost of GeoSPARQL queries can be high due to the fine-grained nature of RDF data models. There are several challenges to using ontologies for geospatial semantic interoperability, but these can be overcome through collaboration.
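A minimal RDF-plus-SPARQL sketch using the rdflib library; the tiny vocabulary and the example.org namespace are invented, and GeoSPARQL functions (e.g., geof:sfWithin) would additionally require a GeoSPARQL-aware triple store.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/geo#")
g = Graph()
g.add((EX.lake_erie, RDF.type, EX.Lake))
g.add((EX.lake_erie, EX.name, Literal("Lake Erie")))

# a plain SPARQL query over the in-memory graph
results = g.query("""
    PREFIX ex: <http://example.org/geo#>
    SELECT ?name WHERE { ?feature a ex:Lake ; ex:name ?name . }
""")
for (name,) in results:
    print(name)   # Lake Erie
```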
