DM-85 - Point, Line, and Area Generalization

Generalization is an important and unavoidable part of making maps because geographic features cannot be represented on a map without undergoing transformation. Maps abstract and portray features using vector (i.e., points, lines, and polygons) and raster (i.e., pixels) spatial primitives, which are usually labeled. These spatial primitives are subjected to further generalization when map scale is changed. Generalization is a contradictory process. On one hand, it alters the look and feel of a map to improve the overall user experience, especially for map reading and interpretive analysis. On the other hand, generalization has documented quality implications and can sacrifice feature detail, dimensions, positions, or topological relationships. A variety of techniques are used in generalization, including selection, simplification, displacement, exaggeration, and classification. These techniques are automated through computer algorithms such as Douglas-Peucker and Visvalingam-Whyatt to enhance their operational efficiency and produce consistent generalization results. As maps are now created easily and quickly, and used widely by both experts and non-experts owing to major advances in information technology, it is increasingly important for virtually everyone to appreciate the circumstances, techniques, and outcomes of generalizing maps. This is critical to promoting better map design and production as well as socially appropriate uses.
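The Douglas-Peucker algorithm named above can be sketched briefly. The idea: join a polyline's endpoints with a straight line, keep the vertex that deviates most from that line if its offset exceeds a tolerance, and recurse on the two halves. The sketch below is a minimal illustration, not a production implementation (libraries such as Shapely provide tested versions).

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:          # degenerate segment: fall back to point distance
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Recursively simplify a polyline: a vertex survives only if it (or a
    vertex in its subtree) deviates from the endpoint chord by more than
    the tolerance, expressed in the coordinates' units."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:            # every intermediate vertex is expendable
        return [points[0], points[-1]]
    # Otherwise split at the farthest vertex and simplify each half.
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right         # drop the duplicated split vertex
```

Note the quality trade-off the entry describes: a larger tolerance yields fewer vertices but greater positional error, and naive per-feature simplification can break topological relationships between neighboring features.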

DM-90 - Hydrographic Geospatial Data Standards

Coastal nations, through their dedicated Hydrographic Offices (HOs), have the obligation to provide nautical charts for the waters of national jurisdiction in support of safe maritime navigation. Accurate and reliable charts are essential to seafarers whether for commerce, defense, fishing, or recreation. Since navigation can be an international activity, mariners often use charts published by different national HOs. Standardization of data collection and processing, chart feature generalization methods, text, symbology, and output validation becomes essential in providing mariners with consistent and uniform products regardless of the region or the producing nation. Besides navigation, nautical charts contain information about the seabed and the coastal environment useful in other domains such as dredging, oceanography, geology, coastal modeling, defense, and coastal zone management. The standardization of hydrographic and nautical charting activities is achieved through various publications issued by the International Hydrographic Organization (IHO). This chapter discusses the purpose and importance of nautical charts, the establishment and role of the IHO in coordinating HOs globally, the existing hydrographic geospatial data standards, as well as those under development based on the new S-100 Universal Hydrographic Data Model.

DM-20 - Entity-based Models

As we translate real-world phenomena into data structures that we can store in a computer, we must determine the most appropriate spatial representation and how it relates to the characteristics of the phenomenon. All spatial representations are derivatives of graph theory and should therefore be described in those terms. This, in turn, helps in understanding the principles of low-level GIS operations. A constraint-driven approach allows the reader to evaluate implementations of the geo-relational principle in terms of the hierarchical level of mathematical space adopted.
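The claim that spatial representations derive from graph theory can be made concrete with a minimal sketch: points as nodes (0-cells), line segments as edges (1-cells), and polygons as faces (2-cells) bounded by closed rings of edges. The type names below are hypothetical, chosen only for illustration; real entity-based models carry far more topological bookkeeping.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A 0-cell: a point with coordinates."""
    x: float
    y: float

@dataclass(frozen=True)
class Edge:
    """A 1-cell: a directed link between two nodes."""
    start: Node
    end: Node

@dataclass
class Face:
    """A 2-cell: a polygon bounded by an ordered ring of edges."""
    boundary: list  # list[Edge]; each edge's end should meet the next edge's start

    def is_closed(self) -> bool:
        """A low-level topological check: does the boundary form a closed ring?"""
        ring = self.boundary
        return all(ring[i].end == ring[(i + 1) % len(ring)].start
                   for i in range(len(ring)))
```

Checks like `is_closed` illustrate the constraint-driven view the entry describes: validity of a representation is expressed as graph-theoretic constraints on the primitives rather than as ad hoc geometry tests.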

DC-29 - Volunteered Geographic Information

Volunteered geographic information (VGI) refers to geo-referenced data created by citizen volunteers. VGI has proliferated in recent years due to the advancement of technologies that enable the public to contribute geographic data. VGI is not only an innovative mechanism for geographic data production and sharing, but may also greatly influence GIScience and geography and their relationship to society. Despite the advantages of VGI, VGI data quality is under constant scrutiny, as quality assessment is the basis for users to evaluate its fitness for use in applications. Several general approaches have been proposed to assure VGI data quality, but only a few methods have been developed to tackle VGI biases. Analytical methods that can accommodate the imperfect representativeness and biases in VGI are much needed for inferential use, where the underlying phenomena of interest are inferred from a sample of VGI observations. Using VGI for inference and modeling adds much value, so addressing the issue of representativeness and VGI biases is important to fulfill VGI’s potential. Privacy and security are also important issues. Although VGI has been used in many domains, more research is desirable to address the fundamental intellectual and scholarly needs that persist in the field.

DC-25 - Changes in Geospatial Data Capture Over Time: Part 1, Technological Developments

Geographic Information Systems (GIS) are fueled by geospatial data. This comprehensive article reviews the evolution of procedures and technologies used to create the data that fostered the explosion of GIS applications. It discusses the need to geographically reference different types of information to establish an integrated computing environment that can address a wide range of questions. This includes the conversion of existing maps and aerial photos into georeferenced digital data. It covers the advancements in manual digitizing procedures and direct digital data capture, including the evolution of software tools used to build accurate databases. It also discusses the role of satellite-based multispectral scanners for Earth observation and how LiDAR has changed the way that we measure and represent the terrain and structures. Other sections deal with building GIS data directly from street addresses and the construction of parcels to support land record systems. It highlights the way Global Positioning System (GPS) technology coupled with wireless networks and cloud-based applications has spatially empowered millions of users. This combination of technology has dramatically affected the way individuals search and navigate in their daily lives while enabling citizen scientists to be active participants in the capture of spatial data. For further information on changes to data capture, see Part 2: Implications and Case Studies.

DM-70 - Problems of Large Spatial Databases

Large spatial databases, often labeled geospatial big data, exceed the capacity of commonly used computing systems as a result of data volume, variety, velocity, and veracity. Additional problems, also labeled with V's, are cited, but the four primary ones are the most problematic and the focus of this chapter (Li et al., 2016; Panimalar et al., 2017). Sources include satellites, aircraft and drone platforms, vehicles, geosocial networking services, mobile devices, and cameras. The problems in processing these data to extract useful information include query, analysis, and visualization. Data mining techniques and machine learning algorithms, such as deep convolutional neural networks, often are used with geospatial big data. The obvious problem is handling the large data volumes, particularly for input and output operations, requiring parallel reads and writes of the data, as well as high-speed computers, disk systems, and network transfer speeds. Additional problems of large spatial databases include the variety and heterogeneity of data, requiring advanced algorithms to handle different data types and characteristics and to integrate them with other data. The velocity at which the data are acquired is a challenge, especially using today’s advanced sensors and the Internet of Things, which includes millions of devices creating data on short temporal scales of microseconds to minutes. Finally, the veracity, or truthfulness, of large spatial databases is difficult to establish and validate, particularly for all data elements in the database.
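The map-reduce pattern behind parallel processing of data that will not fit in memory can be sketched in a few lines: stream the data in fixed-size chunks, compute a partial result per chunk in parallel, then combine the partial results. This is a minimal illustration only; the function names are hypothetical, and a thread pool on one machine stands in for the distributed file systems and clusters that real geospatial big data requires.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def chunks(iterable, size):
    """Yield fixed-size lists so the full dataset never sits in memory at once."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def chunk_bbox(points):
    """Partial result: bounding box (minx, miny, maxx, maxy) of one chunk of (x, y) points."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def parallel_bbox(points, chunk_size=1000, workers=4):
    """Map chunk_bbox over chunks in parallel, then reduce to a global bounding box."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        boxes = list(pool.map(chunk_bbox, chunks(points, chunk_size)))
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

The pattern works because the bounding box is an associative, order-independent aggregate; operations without that property (e.g., many spatial joins) need considerably more coordination, which is one reason query and analysis are listed among the hard problems above.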