FC-31 - Academic Developments of GIS&T in English-speaking Countries: a Partial History

The constellation of science and technology that is now considered a unit (Geographic Information Science and Technology – GIS&T) has emerged from many source disciplines through many divergent and convergent pasts in different times and places. This narrative limits itself to the perspective of the English-speaking community, leaving other regions for a separate chapter. As in the case of many technical developments in the second half of the twentieth century, academic institutions played a key (though far from exclusive) role in innovation and risk-taking. In a number of locations, academic innovators tried out new technology for handling geographic information, beginning as early as the 1960s. Three institutions (University of Washington, Laboratory for Computer Graphics – Harvard University, and Experimental Cartography Unit – Royal College of Art (UK)) deserve particular treatment as examples of the early innovation process. Their innovations may look crude by current standards, but they laid some groundwork for later developments. Academic institutions played a key role in innovation over the past decades, but the positioning of that role has shifted as first government, then commercial sectors have taken the lead in certain aspects of GIS&T. Current pressures on the academic sector may act to reduce this role.

Author and Citation Info: 

Chrisman, N. R. (2020). Academic Developments of GIS&T in English-speaking Countries: a Partial History. The Geographic Information Science & Technology Body of Knowledge (1st Quarter 2020 Edition), John P. Wilson (ed.). DOI: 10.22224/gistbok/2020.1.8.

This entry was published on February 25, 2020. 

This Topic is also available in the following editions:  

DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., and Wentz, E. (2006). Academic origins. The Geographic Information Science & Technology Body of Knowledge. Washington, DC: Association of American Geographers. (2nd Quarter 2016, first digital).

Topic Description: 
  1. Begin with the Beginning?
  2. Academic Centers of Innovation
  3. Parallel Worlds
  4. Changing Roles


1. Begin with the Beginning?

Roger Tomlinson is credited with creating the first geographic information system in the 1960s. Indeed, Tomlinson’s paper presented in Melbourne, Australia (1968) sets out an ambitious agenda for a Canada Geographic Information System (CGIS) spanning the continent. But this development did not appear from nowhere. Tomlinson’s work in the 1960s connects to a network of innovators spread across a number of sectors. These developments include connections around the world, but this chapter will focus on the interlocking developments spanning Canada, Australia, the UK and the USA. These countries effectively formed one community during an important part of this development.

In 1963 (before the CGIS venture took full shape), Tomlinson attended a workshop in Chicago convened by Edgar Horwood, a professor of planning and civil engineering at University of Washington (Chrisman, 2005). Other attendees included Duane Marble, William Garrison, and Howard Fisher. Marble had completed a PhD in geography at University of Washington in 1959, advised by Garrison. Garrison directed a transportation research center at Northwestern University in Chicago, which Marble joined. Fisher (an architect) was an instructor at Northwestern, teaching planning and project management. They were each enthusiastic about the topic of Horwood’s presentation: making maps with computers and building an ‘information system’ for applications like urban planning. Tomlinson found a set of colleagues who understood what he wanted to accomplish. The event in Chicago was the second in a series of meetings that established what became the ‘Urban and Regional Information Systems Association’ (URISA) – in the early 1960s, ‘information system’ was in the air.

The emergence of information systems connects to a larger history of advances in computing. While the computers of 1962 look pretty primitive (and limited) to current eyes, the doubling of transistor density later codified as Moore’s Law was already underway. Computer tools were changing many work practices, and a few early adopters in the academic sector had a front-row seat to these innovations. Horwood’s center at University of Washington offered cities access to a computer through the mail (a centralized service center). Howard Fisher saw the potential for a more generic package of software that could be disseminated (a distributed network). His goal was to ‘see things together’ – meaning display techniques to combine spatial distributions. He obtained funding from the Ford Foundation and found a home for the project at Harvard University, where he founded the Harvard Laboratory for Computer Graphics in 1965 (Chrisman, 2006).

Seeing things together was not a new concept in the discipline of landscape architecture. The use of map overlays to establish relationships in the landscape had a long history, dating back to Manning’s Billerica Town Plan in 1913. In the 1960s, popular emphasis on ecological issues brought this tradition into the public eye in the work of Philip Lewis in Wisconsin and Ian McHarg in New York. McHarg’s (1969) book focused much attention on map overlay, the key component of GIS development at the time.
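The overlay idea can be reduced to a simple computation once layers are gridded. The sketch below is a toy illustration (not any historical system’s actual code): each hypothetical thematic layer scores every grid cell, and summing the layers cell by cell yields a composite suitability surface, the digital analogue of stacking McHarg’s transparent overlays.

```python
def overlay(*layers):
    """Cell-by-cell sum of equally sized grid layers."""
    return [
        [sum(cells) for cells in zip(*rows)]
        for rows in zip(*layers)
    ]

# Hypothetical 2 x 3 layers: higher scores mean less suitable land.
slope = [[1, 2, 3],
         [2, 3, 1]]
soils = [[3, 1, 2],
         [1, 1, 3]]
composite = overlay(slope, soils)  # [[4, 3, 5], [3, 4, 4]]
```

Weighted sums, maxima, or Boolean masks substitute just as easily for the plain sum, which is where much of the later debate about overlay methods took place.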

The discipline of geography also had its own parallel development of mapping and spatial information management. In the 1950s, a particularly active collection of PhD students converged on University of Washington. William Garrison did not supervise all of them, but they worked together on various ways to apply statistical thinking and computing techniques to geographical analysis. Waldo Tobler worked with John Sherman on the cartographic side. Tobler (1959) laid out the issues involved in converting the photographic technologies of cartography for the digital era to come. Duane Marble, along with Brian Berry, Peter Gould and John Nystuen (and many others) participated in what came to be called the quantitative revolution, a generation of innovators dedicated to overthrowing dominant paradigms. Of course, this movement became a paradigm to be overthrown later in turn. By 1968, however, the quantitative movement had swept the field of geography. The movement extended far beyond University of Washington with outposts around the world. The general enthusiasm for “Models in Geography” (the title of a collection of chapters edited by Peter Haggett and Richard Chorley (1967)) demonstrates the theoretical vision of the period, much of which still informs the fundamental questions of GIS&T. At the time, however, the quantitative developments focused on applying standard statistical techniques to geographic data. Data resources were scarce, so researchers were forced to work at aggregate levels, leading to substantial revision of these techniques in the coming decades, with the better understanding of spatial autocorrelation and the Modifiable Areal Unit Problem.

In many disciplines, therefore, there is a background of ‘origins’ of what became GIS&T. Each group had its own path, but there was much convergence and substantial communication and collaboration. The early days of GIS&T development show this interdisciplinary ferment.


2. Academic Centers of Innovation

Canada and the United States were not the only countries entering the computer era. Geographers took up programming in Sweden, Germany, France and Israel (to mention a few). This account will leave those developments aside for another treatment. Strong leadership by David Bickmore led to the creation of the Experimental Cartography Unit (ECU) at Royal College of Art (UK) in 1966/67, based on earlier development work on equipment like the free-cursor digitizing table and digitally controlled plotters (some of this work at University of Glasgow). As Rhind (1988) recounts from personal experience, ECU tried to leap into full map production, spending large sums on projects with dubious chances of success. The staff included a dozen persons from many diverse backgrounds, trying to coax finished maps from electronic equipment under computer control. ECU attempted to work as a development facility for national mapping organizations including Ordnance Survey and British Geological Survey. While ECU may have provided some impetus for future development, few traces remain in current technologies.

Founded slightly earlier, the Harvard Laboratory for Computer Graphics and Spatial Analysis (LCGSA) took a different direction from the start. In the proposal to the Ford Foundation (1965), Howard Fisher aimed to distribute software to allow others to make maps quickly without needing complicated programming intervention. While software is now seen as a huge industry, it was very much an afterthought in the 1960s. The computing giant IBM gave away much of its software to encourage more use of the equipment. Other groups, such as the Kansas Geological Survey, distributed subroutines for certain functions in automated mapping and surface interpolation. LCGSA built ‘packages’ of software – originally SYMAP, a package to make maps on computer line printers. By 1970, over 500 institutions, including university computer centers as well as government agencies and corporations, had purchased SYMAP (Chrisman, 2006). SYMAP output was not a thing of beauty like the output from ECU, but it provided a first introduction to computer applications for a large number of students and professionals.
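The line-printer style of mapping is easy to reconstruct in miniature. The sketch below is a toy illustration, not SYMAP’s actual algorithm: each grid value is mapped to a printable character whose visual density approximates the value, the way 1960s line-printer maps shaded their output (the symbol ramp here is a hypothetical choice).

```python
LEVELS = " .:+*#"  # hypothetical symbol ramp, light to dark

def printer_map(grid, lo, hi):
    """Render a grid of values as lines of density-coded characters."""
    rows = []
    for row in grid:
        chars = []
        for v in row:
            # Scale the value into the symbol ramp, clamping at the top.
            t = (v - lo) / (hi - lo) if hi > lo else 0.0
            idx = min(int(t * len(LEVELS)), len(LEVELS) - 1)
            chars.append(LEVELS[idx])
        rows.append("".join(chars))
    return "\n".join(rows)

art = printer_map([[0, 5, 10], [10, 5, 0]], 0, 10)
```

SYMAP produced darker tones by overprinting several characters in the same position, a trick a modern terminal cannot reproduce, but the principle of quantizing values into a small symbol set is the same.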

LCGSA (or just ‘the Lab’) collected a diverse spectrum of professionals, funded by many different sources (Chrisman, 2006). Fisher’s own discipline of architecture received some attention, leading toward some early minicomputer systems for architectural design, but that remained a minor focus of the Lab. Director William Warntz took a direction toward “Theoretical Geography” that employed mathematics as much as computers. Warntz focused on surfaces, building understanding of the topology of topography (peaks, pits, passes, ridges) and geodesics (least-cost paths) across the surface of any geographic distribution. A group of landscape architects (led by Carl Steinitz and Peter Rogers) took a more practical approach to building landscape inventories based on grid cell databases. Their software developments, through many iterations, led to the MAP package in later years. This group implemented a least-cost path algorithm that remains in general use for many planning applications. Many of the students trained at the Lab moved on to important roles in the developing commercial sector (including Jack Dangermond, founder of ESRI, and Lawrie Jordan and Bruce Rado, founders of ERDAS).
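The grid least-cost-path idea can be sketched with a standard shortest-path search. The code below is a minimal illustration, not the Lab’s actual implementation: Dijkstra’s algorithm over a 4-connected grid of hypothetical per-cell traversal costs, accumulating cost as the path moves from cell to cell.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Accumulated cost of the cheapest 4-connected path, start to goal.

    A minimal sketch of the grid least-cost-path idea; production
    packages use richer neighborhoods and cost models.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    queue = [(dist[start], start)]
    visited = set()
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if (r, c) in visited:
            continue
        visited.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None  # goal unreachable

# A cheap route detours around the expensive middle row.
cost_grid = [[1, 1, 1],
             [9, 9, 1],
             [1, 1, 1]]
path_cost = least_cost_path(cost_grid, (0, 0), (2, 0))  # 7, around the barrier
```

The same search, run to every cell rather than a single goal, produces the cost-distance surfaces that planning applications still rely on.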

After a crisis of funding around 1970, LCGSA rebuilt to an even larger critical mass. This second period culminated with the development of ODYSSEY, a prototype package for GIS data processing based on a topological vector model. LCGSA was one of many academic institutions where innovation occurred in this period, but the conference series Harvard Graphics Week (1979-83) set a style for hundreds of participants to share their developments in computer cartography and early GIS implementations. As Tomlinson and Boyle (1981) reported, the commercial sector was just not prepared to deliver a full set of GIS functions. Of course, neither was ODYSSEY or any other academic software package. The 1980s saw a shift as the focus of software development moved to the commercial sector, which demonstrated that a full set of functions could be delivered by the mid-1980s.

The academic sector also played a critical role in dissemination of GIS technology, based on pilot projects at many institutions. These projects demonstrated the need for institutional reforms and an awareness of issues broader than just algorithms and information system design. As GIS developed, the consulting sector took over much of this role.


3. Parallel Worlds

The disciplines involved in GIS&T – in its current expanse – were not all aware of their relationship in the past. The world of GIS development often stayed close to its origins in geography and cartography. In the same period, big advances were being made in what we now count as related fields. For example, geodesy made huge advances due to new surveying equipment, advances in computing and eventually geopositioning satellites. The major US center of geodetic research at Ohio State University was better connected internationally than to the rest of the academic sector in the country.

The 1970s also saw the first appearance of earth-observing satellites (ERTS-1 – later Landsat – launched in 1972). Many academics and other professionals rushed to work with the new data streams. Though there was some overlap, the issues of remote sensing required different skills and supported different applications (mostly in natural resources). Some well-funded centers, like the Jet Propulsion Laboratory at Caltech, developed elaborate software for image processing, providing at least a proof of concept for later commercial developments. While mainstream production cartography stayed in the vector world, the raster world developed its own professional venues and associations. The optical technology of photogrammetry had developed an academic presence largely in civil engineering (in the US). The fundamental approach dealt with each image taken at a particular time, rather than the concept of a single entity in a geographic database enduring over time. Both approaches are important to GIS&T, but they were hard to reconcile in the early days of limited memory and slow computers.

Commercial software based on the raster model also developed somewhat earlier. A version of the IMGRID software from Harvard was developed at Georgia Tech into the product commercialized by ERDAS in 1980 (Faust, 1998).


4. Changing Roles

From the 1970s into the 1980s, the academic sector had taken the lead in innovations, often in concert with major national mapping and surveying organizations. The balance shifted as the commercial software sector began to take shape. The initial market for GIS implementations did not look huge: one system per US state for some kind of natural resource inventory, and similar penetration worldwide. Of course, this proved a gross underestimate of the pervasive demand for geographic information systems.

The increasing demand for trained staff for all the new GIS developments fueled expansion of academic programs, first in the US, then worldwide. In parallel, the research sector became better organized, with new journals and more careful peer review of articles. Following an exciting competition, a consortium of three US universities established a National Center for Geographic Information and Analysis (1989) with funding from the National Science Foundation. This institution had the resources to convene researchers from around the world to focus on a series of initiatives that spanned disciplines and raised the standard of research. NCGIA also worked to establish a ‘core curriculum’ with each segment contributed by specialists around the country and the world. Though rarely delivered exactly according to plan, it provided the starting point for many academic initiatives.

NCGIA reached out to a number of related disciplines, from spatial statistics to cognitive psychology and philosophy. The initiative structure often mobilized some degree of engagement, but did not provide a structure for longer-term collaborations. In concert with non-NCGIA institutions, US universities organized the University Consortium for Geographic Information Science (www.ucgis.org). Currently there are over 60 member universities. The principle is that each campus operates its own internal network across all relevant disciplines, though geography remains a lead in most cases. No one sector has the sole key to innovation. The success of GIS&T development has always depended on crossing boundaries and enlisting new partnerships.


Chrisman, N.R. (2005). Communities of scholars: Places of leverage in the history of automated cartography. Cartography and Geographic Information Science, 32(4), 425-433. DOI: 10.1559/152304005775194674.

Chrisman, N. R. (2006). Charting the unknown: How automated mapping became GIS at the Harvard Lab. Redlands, CA: ESRI Press.

Faust, N. (1998). Raster-based GIS. In T. Foresman (Ed.) The History of Geographic Information Systems: Perspectives of the pioneers (pp. 59-72). Upper Saddle River, NJ: Prentice Hall.

Haggett, P. & Chorley, R. (1967). Models in Geography. London: Methuen.

Horwood, E., Rogers, C., Rom, A. R. M., Olsonoski, N., Clark, W. L.  & Weitz, S. (1963). Computer methods of graphing, data positioning and symbolic mapping: A manual for user professionals in urban analysis and related fields. University of Washington, Department of Civil Engineering.

McHarg, I. L. (1969). Design with nature. Garden City, NY: Natural History Press.

National Center for Geographic Information and Analysis (NCGIA). (1989). The research plan of the National Center for Geographic Information and Analysis. International Journal of Geographical Information System, 3(2), 117-136. DOI: 10.1080/02693798908941502

Rhind, D. (1988). Personality as a factor in the development of a discipline: The example of computer-assisted cartography. American Cartographer, 15(3), 277-289. DOI: 10.1559/152304088783886928.

Tobler, W. R. (1959). Automation and cartography. Geographical Review, 49, 526-534.

Tomlinson, R. F. (1968). A geographic information system for regional planning. In G. A. Stewart (Ed.), Land Evaluation (pp. 200-210). Melbourne, Australia: Macmillan.

Tomlinson, R. F. & Boyle, A. R. (1981). The state of development of systems for handling natural resources inventory data. Cartographica, 18, 65-95.

Learning Objectives: 
  • Identify the key academic disciplines that contributed to the development of GIS&T
  • Evaluate the role that the Quantitative Revolution in geography played in the development of GIS&T
  • Discuss the contributions of early academic centers of GIS&T research and development (e.g., Harvard Laboratory for Computer Graphics, UK Experimental Cartography Unit)
  • Understand the major shifts in research foci during the 1960s, 1970s, 1980s
Instructional Assessment Questions: 
  1. How would you try to implement computer mapping software on a computer with a total memory of 32K (kilobytes, not megabytes)?
  2. Is the rift between raster and vector models still visible in your current software packages?
  3. How have geopositioning satellites changed the availability of detailed spatial data?
  4. Where would you go to engage with other people in this growing sector? (to an academic conference? to a commercial vendor user conference? to an online dialog/ user group?) How has this changed over the past decades?