CV-38 - Usability Engineering & Evaluation


In this entry, we introduce tenets of usability engineering (UE) and user-centered design (UCD), interrelated approaches to ensuring that a map or visualization works for its intended users and uses. After a general introduction to these concepts and processes, we discuss the treatment of UE and UCD in research on cartography and geographic visualization. Finally, we present a classification of UE evaluation methods, including a general overview of each category of methods and their application to cartographic user research.

Author and Citation Info: 

Ooms, K. and Skarlatidou, A. (2018). Usability Engineering and Evaluation. The Geographic Information Science & Technology Body of Knowledge (1st Quarter 2018 Edition), John P. Wilson (ed). DOI:10.22224/gistbok/2018.1.9.

This entry was first published on March 9, 2018. No earlier editions exist. 

Topic Description: 
  1. Definitions
  2. Overview
  3. UE Approaches and Applications
  4. UE Evaluation Methods

 

1. Definitions

Evaluation - the assessment of the extent to which a product, mapping or otherwise, supports user needs

Conceptual development - the outline of a product’s functional requirements prior to product development, as identified from the work domain analysis

Debugging - the process of fixing errors and optimizing code before final release of a product to the end users

Discount usability - a cost-effective approach to usability evaluation that prescribes testing with few participants and early prototypes

Expert-based methods - methods involving non-user participants with a high level of expertise

Field-based studies - methods conducted in a real-life setting for the product with real users and use-case scenarios

Formative studies - exploratory studies conducted early in design aimed to reveal user needs and product requirements

Laboratory-based studies - methods conducted in a controlled setting to simplify the study protocol and avoid confounding issues with data collection and the testing environment

Mixed-methods - the combination of different methods in order to triangulate findings and improve design

Prototyping - the generation of partially-functional product designs to gather feedback during the early stages of development

Qualitative methods - methods that produce non-numerical, descriptive data

Quantitative methods - methods that produce numerical data

Summative studies - confirmatory studies conducted at the end of design to evaluate the product’s performance against criteria established earlier in design

Target user - the end users who will operate the product

Theory-based methods - methods applied by the product designers that draw on cartography and visualization literature to inform evaluation

Usability engineering - a collection of processes and methods to improve the usability of a product

Usability - the ease of use of a product, taking into account the target user group and tasks 

  • Effectiveness: the ability of the user to complete a task
  • Efficiency:  the speed by which a user can complete a task
  • Error tolerance: the severity and number of errors and the difficulty of recovery when errors occur
  • Learnability: the level of difficulty to learn and start using a product
  • Memorability: the level of difficulty to start using the product again when it was not used for some time
  • Satisfaction: the users’ subjective feelings about their experience with a product; an expression of how they like working with a product

User-based methods - methods soliciting input and feedback from target users of the product

User-centered design - a multi-step and iterative approach to designing a product that acquires input from users (stakeholders, end users, experts, etc.) throughout design

Utility - the usefulness of a product, matching the user goals and tasks to the implemented functionality

Work domain analysis - the process of collection, analysis, and synthesis of the target users’ functional requirements

 

2. Overview

Have you ever worked with an interactive map or visualization where you could not find the function you needed, or where you found the correct function but the results were completely different from what you expected? These frustrations reflect how well a certain interactive map or visualization supports the user in performing specific tasks (e.g., visualizing, analyzing, interpreting, processing data), taking into account the context of use (individual, shared, in an office, in the field, etc.). In this entry, we first introduce the concepts of usability and usability engineering (UE), with corresponding definitions and their application to interactive maps and visualizations. Next, we discuss a well-established approach that implements UE, user-centered design (UCD), focusing on its stages and its application in research on interactive maps and visualizations. Finally, we present an overview of the different methods used in UE (and thus also UCD), including some structures to categorize them.

2.1 Usability and usability engineering

Usability engineering (UE) originated in the field of software development. UE is a toolbox of principles and methods that can be used throughout the lifecycle of product and system development, including interactive maps and visualizations. The ultimate goal of UE is the creation of more usable, user-friendly products that are tailored to the actual needs of target end users. IBM in 1981 (Whiteside et al. 1988) and later Apple (Nielsen 1993) were among the first companies to establish usability laboratories to improve their products.

To understand usability engineering, we first need to clarify what the term usability implies. One of the earliest definitions of usability emphasized the importance of effectiveness, learnability, flexibility, and attitude (Shackel 1986). This definition also acknowledged that usability is contextual, depending on variables associated with the users, the use environment, and the user tasks, as well as the significance of affective elements, such as satisfaction and likeability, which are now integrated into the wider user experience (UX; see UI/UX Design) of geospatial technologies.

Furthermore, Nielsen (1993) defined usability as a quality attribute assessing how easy interfaces are to use, and outlined five usability attributes:

  1. Learnability: it should be easy to learn how to work with the system so the user can start using it rapidly;
  2. Efficiency: once the user has learned the system it should be easy to use to achieve specific tasks;
  3. Memorability: when the system is not used for some time it should be easy to return to it without having to learn it again;
  4. Errors: there should be as few errors as possible, and users should be able to recover easily from them when they occur;
  5. Satisfaction: the users should be satisfied when using the system.

During the same period, Nielsen (1992) argued for a shift towards a more engineering-focused approach to usability, urging iterative testing and the use of standardized discount usability approaches in product evaluations. Within this context, usability engineering has been described as a set of methods for investigating and improving system and software usability (Nielsen 1993). In 1998, the International Organization for Standardization published ISO 9241-11, which defines usability as “The effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments” (ISO 1998, p.2). In this definition, effectiveness is the ability of the user to complete a task, efficiency refers to the time and cognitive effort required to complete a task, and satisfaction describes the user's response after task completion.
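
To make these three ISO components concrete, the sketch below computes them from a small, entirely hypothetical set of user-test records: effectiveness as the task completion rate, efficiency as the mean completion time of successful attempts, and satisfaction as the mean post-task rating. The data and variable names are invented for illustration.

```python
# Minimal sketch: the three ISO 9241-11 usability components computed
# from (hypothetical) user-test records. All values are illustrative.

sessions = [
    # (participant, task completed?, time in seconds, satisfaction 1-5)
    ("P1", True,  74, 4),
    ("P2", True,  91, 3),
    ("P3", False, 120, 2),
    ("P4", True,  66, 5),
    ("P5", True,  83, 4),
]

completed = [s for s in sessions if s[1]]

# Effectiveness: share of participants who completed the task.
effectiveness = len(completed) / len(sessions)

# Efficiency: mean completion time of successful attempts.
efficiency = sum(s[2] for s in completed) / len(completed)

# Satisfaction: mean post-task rating (e.g., a 5-point Likert item).
satisfaction = sum(s[3] for s in sessions) / len(sessions)

print(f"effectiveness: {effectiveness:.0%}")    # 80%
print(f"efficiency: {efficiency:.1f} s")        # 78.5 s
print(f"satisfaction: {satisfaction:.1f} / 5")  # 3.6 / 5
```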

In line with this latter definition, Roth et al. (2015) synthesized these definitions into three tightly-related components that define an interface's success: usability, utility, and users. Besides usability in Nielsen's (1993) sense, the authors thus also considered utility and the user. Utility refers to the 'specified goals' in the ISO definition, taking into account the user's tasks and goals with the system. These are inherently linked to the characteristics of the users themselves; it is therefore essential to define a target user group: the 'specified users' in the ISO definition.

As an illustration of these concepts, two screenshots of well-known products in the field of cartography and GIS are shown in Figure 1: Esri's ArcMap and Google Maps. These popular interactive maps represent two different use cases in cartography and GIS. The number of features (utility) implemented in ArcMap is much greater than in Google Maps. Google Maps allows users to view the basemap with some basic interaction tools (zooming, panning, switching to a satellite view, showing pictures, enabling Street View, querying information, etc.). In ArcMap, the user can also add multiple (raster and vector) layers atop a basemap, and a large collection of tools is available to process the data: reprojecting data layers, editing vector data, advanced network analysis, spatial analysis, 3D analyses, etc. The usability of these systems depends on the third factor: the characteristics and goals of the target user. A user who just wants to find the shortest route from a holiday park to the nearest shop will not be able to do this efficiently in ArcMap because of the overload of tools from which the appropriate one has to be selected. There is thus a much steeper learning curve associated with ArcMap compared to Google Maps (learnability), resulting in a lower usability of ArcMap for this user and task. On the other hand, if a geospatial analyst is asked to calculate the distribution of shop categories and their accessibility at a certain distance around a holiday park, the usability of Google Maps will be much lower because of its lack of utility.

 

Figure 1: Comparing the utility-usability-user trade-off for two well-known interactive maps in cartography and GIS: Esri’s ArcMap (top) and Google Maps (bottom). 

 

2.2 UE in cartography and geovisualization

Usability engineering in cartography and geovisualization can be traced to the 1970s, with preliminary research on the usability of geospatial technologies and on human spatial cognition and interaction with maps. The early 1990s saw a growing interest in the topic (see Haklay 2010), with usability in this context referring to a number of critical components that influence how people interact with geospatial systems, including: the users (e.g., Montello 2009; van Elzakker & Griffin 2013); the maps (e.g., Haklay 2010); and the user-map interaction (e.g., Nivala et al. 2008).

 

3. UE approaches and applications

3.1 User-centered design

The consideration of human capabilities and other human characteristics in the design of computerized systems (Nickerson 1969) gave birth to a series of approaches and research practices that address these issues. Having its roots in ergonomics and human factors, user-centered design (UCD) was first established in the 1980s as a philosophy and design methodology that places users at the center of the product development process (Norman & Draper 1986). Users are engaged with the design in ways similar to participatory research, providing input into the early conceptualization of the product and feedback on each design iteration. Norman (1988) stresses the importance of understanding users' needs and eliciting their requirements for a usable design. He therefore suggests that in UCD users are involved from the very early stages of product conceptualization and gathering of user requirements through to iterative testing and evaluation.

Figure 2. A general overview of the UCD approach (adapted from Ooms, 2016).

 

Over the years, several approaches or processes have been recommended in cartography and GIScience. These all take the form of a number of iterative stages, going from gathering requirements and designing an initial prototype to analyzing and refining the design into a final product (see Figure 2). Slocum et al. (2003) proposed a process consisting of six steps, which was later revised by Robinson et al. (2005) and Roth et al. (2015) to include: (i) work domain analysis (user requirements, needs assessment), (ii) conceptual development, (iii) prototyping, (iv) interaction and usability studies (iterative evaluation), (v) implementation (product development), and (vi) debugging.

The first two steps focus on defining the target users, the tasks the application should support, and the features that should be included for these target users: the utility of the system. The importance of these steps - in which the requirements of the system are gathered - is stressed by van Elzakker and Wealands (2007). After a number of iterations over steps (iii) and (iv) above, a fully operational system is implemented, which is again iteratively evaluated to improve its usability.

Nivala et al. (2007) studied map makers' familiarity with the techniques used in usability engineering and the suitability of those techniques for evaluating (screen) map designs. They concluded that most map-making companies are interested in applying UCD, but lack knowledge of how to implement the approach, including the different evaluation techniques. Nevertheless, UCD has increasingly been applied in the design and development of interactive maps and visualizations supporting cartographic research (see next section). Still, van Elzakker & Griffin (2013) stress the importance of involving users during a product's development, including their requirements and (cognitive) capabilities. In this context, UCD training is essential for cartography and visualization.

3.2 UCD applications in cartographic research

UCD has been applied in a wide variety of cartographic applications: desktop applications, web mapping, virtual environments, collaborative environments, mobile mapping, etc. (see also Haklay 2010). The focus should not only be on the usability of the cartographic product itself, but on all aspects of the Geographic Information Technology applied (usability of GI, databases, methods of data collection, hardware, software, interfaces, etc.). The structure of the final system, and thus the type of application that is most appropriate, is derived from the requirements analysis (i.e., what Roth et al. (2015) describe as steps (i) and (ii)). Below we present an overview of topics within this Body of Knowledge for which a UCD approach has been used to enhance the related interactive maps and visualizations: Web Mapping, Geovisualization (forthcoming), Geovisual Analytics, Virtual and Immersive Environments (forthcoming), Mobile Mapping and Responsive Design (forthcoming), Geocollaboration (forthcoming), User Interface and User Experience Design (e.g. Roth 2015; Schobesberger 2012; Delikostidis 2011; Skarlatidou & Haklay 2006).

 

4. UE Evaluation Methods

There is a variety of UE methods, drawn from various disciplines, that can be used at different stages of system/product design and development (e.g., within UCD) and that serve different purposes. Several methodological taxonomies have been proposed in the literature; these are briefly reviewed in this section.

Besides selecting appropriate UE methods that serve the purposes and aims of the wider methodological framework, it is equally important that the methodological design and the results are reported in a consistent manner, which is essential for the transferability, reliability, generalizability, and reproducibility of scientific research (see Cartography & Science). This description should consist of three main elements: information about who the participants are, the materials used (e.g., prototype, cartographic product), and the procedure followed (e.g., methods applied, user tasks). If user testing is one of the methods used, a critical methodological concern is the number of participants to recruit to ensure the reliability and objectivity of the findings. Such a decision depends on several factors (e.g., whether the data will be statistically validated), but within discount usability engineering, Nielsen and Landauer (1993) demonstrated that testing with five participants can yield enough insight into interaction problems, while after the ninth participant the results become repetitive.
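
This rule of thumb follows from a simple probabilistic model. The sketch below reproduces the commonly cited form of the Nielsen and Landauer (1993) model; the discovery rate L ≈ 0.31 is the average they report across case studies, and actual rates vary by product and test protocol.

```python
# Proportion of usability problems found by n participants under the
# Nielsen & Landauer (1993) model: found(n) = 1 - (1 - L)**n, where L
# is the probability that a single participant exposes a given problem.
# L ~= 0.31 is the average rate reported across their case studies.

L = 0.31
for n in range(1, 10):
    found = 1 - (1 - L) ** n
    print(f"{n} participants -> {found:.0%} of problems found")

# With five participants roughly 85% of the problems surface; beyond
# that, each additional participant adds little new information.
```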

The distinction between qualitative and quantitative methods is perhaps the most commonly used, referring to methods that collect qualitative data (mainly descriptive data in textual or other non-numerical form) and quantitative data (mainly numerical data), respectively. In UE, qualitative and quantitative methods are equally important, and many studies mix them in order to effectively answer the underlying research questions. Quantitative methods in Human-Computer Interaction (HCI) research have traditionally been used in controlled experiments and hypothesis testing, although they are not limited to these situations. Qualitative methods, on the other hand, have received increasing attention, as they provide an in-depth understanding of how users interact with the technology of interest in particular contexts of use. Popular qualitative methods in UE include the so-called 'think aloud' study and participant observation, while methods such as eye tracking are mostly used to collect quantitative data. Other methods, such as interviews and questionnaires, enable the collection of both kinds of data; for example, questionnaires can collect user demographics (e.g., age), rankings of preferences, and Likert responses, but also open-ended answers describing feelings, opinions, and experiences.
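
As a small illustration of the two data types, the sketch below summarizes one closed Likert item quantitatively, with entirely hypothetical responses; the comments indicate how the matching open question would be handled qualitatively.

```python
from collections import Counter
from statistics import median

# Hypothetical responses to one 5-point Likert item
# ("The map legend was easy to understand", 1 = strongly disagree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Quantitative summary: Likert scores are ordinal, so the median and
# the frequency distribution are safer summaries than the mean.
print("median:", median(responses))         # 4.0
print("distribution:", Counter(responses))  # Counter({4: 4, 5: 3, 3: 2, 2: 1})

# Qualitative data from the matching open question would instead be
# coded into themes (e.g., "symbols unclear", "too much text") and
# those themes reported alongside the numbers.
```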

Another popular distinction of HCI methods or study designs is between formative and summative studies, which references the stage of the UCD process in which the method takes place. Formative - or exploratory - studies are conducted at the beginning of the UCD process in order to define the target users' profiles and the product's utility (steps (i) and (ii) of the UCD process in Roth et al., 2015). Their focus is on understanding human-computer interactions with the product, which includes discussing and testing prototypes (step (iii) of the UCD process). Summative - or assessment - studies are conducted in the later stages of the UCD process (steps (iv) to (vi) in Roth et al., 2015), when the structure of the product has been defined. Their focus is on quantitative results: statistically comparing user performance between different configurations (interaction device, interface components, etc.) or against benchmarks as a form of quality assurance.
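
As an illustration of the summative case, the sketch below runs Welch's t-test on hypothetical completion times for two interface configurations; the data and variable names are invented, and SciPy is assumed to be available.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) from a summative study
# comparing two interface configurations of the same map prototype.
variant_a = [41, 37, 52, 45, 39, 48, 44, 50]
variant_b = [55, 61, 49, 58, 63, 54, 60, 57]

# Welch's t-test: does mean completion time differ between variants?
# (It does not assume equal variances; for small or skewed samples a
# non-parametric test such as Mann-Whitney U would be an alternative.)
t, p = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```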

Another classification in UE focuses not on particular methods but on the context in which the methods are used (e.g., Carpendale 2008). Laboratory-based studies are characterized by their controlled nature, ensuring a high level of reliability and repeatability of the results. Nevertheless, they often lack realism: the product is not used in its proper context. When the cartographic product is used in a realistic context - in field-based studies - many (unforeseen) elements might influence the results (different lighting conditions, noise, smells, interaction with other persons, moving objects, etc.). High ecological validity can thus jeopardize the reliability of the obtained measurements.

The last taxonomy differentiates UE methods into expert-based, user-based, and theory-based methods. Expert-based methods support an evaluation that is carried out by experts and are used to expose interaction problems (i.e., usability problems) associated with the user interface design. This category includes methods such as guideline review, heuristic evaluation, cognitive walkthrough, and consistency inspection. Of these predictive evaluations, heuristic evaluations and cognitive walkthroughs are among the most popular (explained in more detail in Table 1). User-based methods involve the recruitment of real users and facilitate an understanding of their difficulties through observation as they interact with the system and think aloud. User-based methods not only help to detect usability deficiencies, but can also support the development of innovative and creative solutions. Several methods support or assume the involvement of real users, the most popular being usability user testing. Roth et al. (2015) add a third category, namely theory-based methods, such as scenario-based design, consultation of secondary sources, and automated evaluations. Table 1 provides an overview of some of the most popular methods used in cartography and visualization. For a preliminary overview and example of how the methods can be implemented in a cartographic context, refer to Skarlatidou et al. (2010).
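
To make the expert-based workflow concrete, the sketch below aggregates hypothetical heuristic-evaluation findings across evaluators; the record layout is illustrative, and the 0-4 severity scale follows Nielsen's common rating scheme (0 = cosmetic, 4 = catastrophic).

```python
# Illustrative aggregation of heuristic-evaluation findings: several
# evaluators independently rate problems and the ratings are merged
# to prioritize fixes. Problems and ratings here are invented.
findings = [
    {"problem": "legend hidden behind toolbar", "evaluator": "E1", "severity": 3},
    {"problem": "legend hidden behind toolbar", "evaluator": "E2", "severity": 4},
    {"problem": "zoom resets layer selection",  "evaluator": "E1", "severity": 2},
    {"problem": "zoom resets layer selection",  "evaluator": "E3", "severity": 3},
    {"problem": "no undo for edits",            "evaluator": "E2", "severity": 4},
]

by_problem = {}
for f in findings:
    by_problem.setdefault(f["problem"], []).append(f["severity"])

# Rank problems by mean severity; ties broken by how many evaluators
# reported the problem (more reports -> more confidence it is real).
ranked = sorted(by_problem.items(),
                key=lambda kv: (sum(kv[1]) / len(kv[1]), len(kv[1])),
                reverse=True)
for problem, sev in ranked:
    print(f"{sum(sev) / len(sev):.1f}  ({len(sev)} evaluators)  {problem}")
```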

 
Table 1: Overview of some of the most popular UE methods
Each entry lists the UE method and example literature applying it in cartography and geovisualization, followed by its general characteristics, the insights it provides, and its challenges.
Questionnaire & Survey (Haklay and Zafiri 2008; Allison et al. 2016)

General characteristics:

  • qualitative and quantitative
  • open vs. closed questions
  • User Interface Satisfaction questionnaire (QUIS); Perceived Usefulness and Ease of Use (PUEU)

Insights into…

  • demographic information
  • perceived usability or user experience

Challenges:

  • formulation of questions
  • cross-verification of answers
Eye Tracking (Fabrikant et al. 2008; Ooms et al. 2012; Popelka and Brychtova 2013)

General characteristics:

  • qualitative and quantitative
  • records the participant’s Point of Regard at a certain sampling rate
  • aggregation into metrics related to fixations and saccades (see the sketch after this table)

Insights into…

  • where participants are (not) looking
  • how visual information is processed by the user

Challenges:

  • large amount of data
  • interpretation of the data
  • recruiting participants
Thinking Aloud (van Elzakker 2004; Flink et al. 2011; Ooms et al. 2015)

General characteristics:

  • think out loud
  • transcription of verbal utterances
  • protocol analysis

Insights into…

  • direct, unfiltered thoughts
  • cognitive reasoning
  • participants’ needs and expectations

Challenges:

  • recruiting a representative sample of participants
  • influences users’ efficiency
  • long processing times
  • users might be hesitant to talk
Usability User Testing - Observation (Skarlatidou and Haklay 2006; Nivala et al. 2008)

General characteristics:

  • involvement of real users
  • starts with a good understanding of the system, the supported tasks, and its end users
  • performance of realistic tasks with the system
  • evaluator observes participants

Insights into…

  • how participants work with the system
  • user behavior

Challenges:

  • definition of realistic tasks with the system
  • availability of an appropriate system
Cognitive Walkthrough (Skarlatidou et al. 2010; Savage et al. 2012; Brown et al. 2013)

General characteristics:

  • simulates the user's problem-solving process
  • takes the cognitive and affective processing of the end user into account
  • requires expert or novice evaluators
  • participants impersonate the needs, goals, and tasks of a potential user, using persona-based scenarios

Insights into…

  • which User Interface elements may pose difficulties to the user
  • severity rating for each problem

Challenges:

  • participants have to impersonate a potential user
Heuristic Evaluation (Skarlatidou et al. 2010; Brown et al. 2013)

General characteristics:

  • one of the most commonly used inspection methods
  • evaluators inspect the User Interface based on a list of heuristics
  • heuristics incorporate interaction elements: visibility of system status, match between the system and the real world, help and documentation, error prevention, ...

Insights into…

  • potential interface issues

Challenges:

  • definition of heuristics
  • interpretation of heuristics
  • finding appropriate expert evaluators
Focus Group (Monmonier and Gluck 1994; Harrower et al. 2000)

General characteristics:

  • qualitative data
  • discuss new concepts or identify issues in a small group

Insights into…

  • user requirements and expectations

Challenges:

  • recruiting a representative sample of participants
  • moderating the discussion
Interview (Slocum et al. 2004)

General characteristics:

  • qualitative data
  • structured, semi-structured, or unstructured

Insights into…

  • user requirements and expectations
  • user behavior (discussed after experiment)

Challenges:

  • moderating the interview
  • access to participants
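
The eye-tracking entry in Table 1 mentions aggregating the recorded point of regard into fixation metrics. The sketch below shows one common approach, a minimal dispersion-threshold fixation detector; the thresholds and the plain-list sample layout are illustrative assumptions rather than recommended values.

```python
# A minimal dispersion-threshold fixation detector: gaze samples that
# stay within a small spatial window for a minimum duration are merged
# into one fixation. Real studies calibrate both thresholds to the
# tracker's sampling rate, screen resolution, and viewing distance.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.10):
    """samples: list of (t_seconds, x_px, y_px) tuples in temporal order."""
    fixations, window = [], []
    for s in samples:
        window.append(s)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # Dispersion exceeded: close the window if it lasted long enough.
            window.pop()
            if window and window[-1][0] - window[0][0] >= min_duration:
                fixations.append((window[0][0],                   # onset
                                  window[-1][0] - window[0][0],   # duration
                                  sum(p[1] for p in window) / len(window),
                                  sum(p[2] for p in window) / len(window)))
            window = [s]
    if window and window[-1][0] - window[0][0] >= min_duration:
        fixations.append((window[0][0], window[-1][0] - window[0][0],
                          sum(p[1] for p in window) / len(window),
                          sum(p[2] for p in window) / len(window)))
    return fixations  # (onset, duration, mean x, mean y) per fixation
```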

 

It is good practice to combine multiple methods to optimize evaluation realism, reliability, and validity (e.g., Bleisch 2011). Realism - or internal validity - reflects how well real-life situations are implemented in the experiment's design. Reliability relates to the consistency of the findings and thus the repeatability of the measurements. Obtaining the same results from different methods strengthens the reliability of these findings. Finally, external validity - or generalizability - refers to the applicability of the experiment and its results to other contexts. An overview of how a mixed-methods approach can be applied in the UCD of interactive maps and visualizations is provided by Ooms (2016). A first strategy is to combine methods that complement each other in such a way that their limitations are covered: the weakness of the first method is targeted by the second, making it possible to measure a broader spectrum of variables and thus derive more solid conclusions. A second strategy is to combine methods that measure exactly the same factor, which serves as a check on the reliability of the experiment and the recorded data. Finally, different methods can be combined that do not necessarily cover each other's limitations, but provide data from which - when combined - additional insights can be derived. As such, new findings are triangulated across the different data sources.

References: 

Allison, C., Treves, R. and Redhead, E. (2016). The Usability of Online Data Maps: An Ongoing Web Based Questionnaire Investigation into Users’ Understanding and Preference for Geo-Spatial Visualisations, Proceedings of GIS Research UK, 3 April 2013, Liverpool, UK.

Bleisch, S. (2011). Evaluating the appropriateness of visually combining abstract quantitative data representations with 3D desktop virtual environments using mixed methods. City University London.

Brown, M., Sharples, S., and Harding, J. (2013). Introducing PEGI: a usability process for the practical evaluation of Geographic Information. International Journal of Human-Computer Studies, 71 (6). pp. 668-678.

Delikostidis, I. (2011). Improving the usability of pedestrian navigation systems. Enschede, University of Twente, Faculty of Geo-Information and Earth Observation (ITC).

Fabrikant, S. I., Rebich-Hespanha, S., Andrienko, N., Andrienko, G., and Montello, D. R. (2008). Novel method to measure inference affordance in static small-multiple map displays representing dynamic processes. The Cartographic Journal, 45(3), 201-215.

Flink, H. M., Oksanen, J., Pyysalo, U., Rönneberg, M., and Sarjakoski, L. T. (2011). Usability evaluation of a map-based multi-publishing service. Advances in Cartography and GIScience. Volume 1, pp. 239-257.

Haklay, M. (2010). Interacting with geospatial technologies. UK: John Wiley & Sons.

Haklay, M. and Zafiri, A. (2008). Usability Engineering for GIS: Learning from a screenshot. The Cartographic Journal, 45(2), pp. 87-97.

Harrower, M., MacEachren, A., and Griffin, A. L. (2000). Developing a geographic visualization tool to support earth science learning. Cartography and Geographic Information Science, 27(4), 279-293.

ISO. (1998). ISO9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) Part 11: Guidance on usability.

Lu, Y-T, Ellul, C. and Skarlatidou, A. (2016). Preliminary Investigations into usability of 3D Environments for 2D GIS. Proceedings of the 24th GIS Research UK, 3 April, 2016, Greenwich, UK

Monmonier, M., and Gluck, M. (1994). Focus groups for design improvement in dynamic cartography. Cartography and Geographic Information Systems, 21(1), pp. 37-47.

Montello, D. R. (2009). Cognitive research in GIScience: Recent achievements and future prospects. Geography Compass, 3(5), 1824-1840.

Nickerson, R. (1969). Man-Computer interaction: A challenge for human factors research. Ergonomics, 12: 501-517. (Reprinted from IEEE Transactions on Man-Machine Systems, 10(4), pp. 164-180.)

Nielsen, J. (1993). Usability Engineering. San Francisco: Morgan Kaufmann.

Nielsen, J. (1992). The usability engineering lifecycle. Computer, 25(3), 12-22.

Nielsen, J. and Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In: Proceedings of the INTERCHI Conference on Human Factors in Computing Systems (CHI'93), Amsterdam, The Netherlands, 24-29 April 1993, pp. 206-213.

Nivala, A.-M. (2007). Usability Perspectives for the Design of Interactive Maps, Publications of the Finnish Geodetic Institute, PhD thesis.

Nivala, A.-M., Sarjakoski, L. T., and Sarjakoski, T. (2007). Usability methods' familiarity among map application developers. International Journal of Human-Computer Studies, 65(9), 784-795.

Nivala, A.-M., Brewster, S. and Sarjakoski, T. (2008). Usability Evaluation of Web Mapping sites. The Cartographic Journal, 45 (2), pp. 129-138.

Norman, D. (1988). The Design of Everyday Things. New York: Basic Books. ISBN 978-0-465-06710-7.

Norman, D. and Draper, S. (1986). User Centered System Design: New Perspectives on Human-Computer Interaction. L. Erlbaum Assoc. Inc., Hillsdale, NJ, USA.

Ooms, K. (2016). Cartographic User Research in the 21st Century: Mixing and Interacting. In: 6th International Conference on Cartography and GIS.

Ooms, K., Andrienko, G., Andrienko, N., De Maeyer, P., and Fack, V. (2012). Analysing the spatial dimension of eye movement data using a visual analytic approach. Expert Systems with Applications, 39(1), 1324-1332.

Ooms, K., De Maeyer, P., and Fack, V. (2015). Listen to the map user: Cognition, memory, and expertise. The Cartographic Journal. 52(1), 3-19.

Popelka, S., and Brychtova, A. (2013). Eye-tracking study on different perception of 2D and 3D terrain visualisation. The Cartographic Journal, 50(3), 240-246.

Robinson, A. C., J. Chen, E. J. Lengerich, H. G. Meyer, and A. M. MacEachren. (2005). Combining usability techniques to design geovisualization tools for epidemiology. Cartography and Geographic Information Science 32 (4):243-255.

Roth, R. E., Ross, K. S., and MacEachren, A. M. (2015). User-centered design for interactive maps: A case study in crime analysis. ISPRS International Journal of Geo-Information, 4(1), 262-301.

Schobesberger, D. (2012). Towards a Framework for Improving the Usability of Web-mapping Products. PhD Thesis. University of Vienna.

Shackel, B. (1986). Ergonomics in design for usability. In: Conference of British Computer Society Human Computer Interaction Specialist Group. Cambridge University Press, 44-64.

Skarlatidou, A. and Haklay, M. (2006). Public Web Mapping: Preliminary Usability Evaluation, GIS Research UK 2005, Nottingham.

Skarlatidou, A., Haklay, M., and Cheng, T., (2010). Preliminary Investigation of Web GIS Trust: The Example of the “WIYBY” Website, Proceedings of Joint International Conference on Theory, Data Handling and Modelling in GeoSpatial Information Science, Hong Kong. 26-28 May, 2010.

Slocum, T. A., Sluter, R. S., Kessler, F. C., and Yoder, S. C. (2004). A qualitative evaluation of MapTime, a program for exploring spatiotemporal point data. Cartographica, 59, 43-68.

van Elzakker, C. P. J. M. (2004). The use of maps in the exploration of geographic data. University of Utrecht, ITC Dissertation 116, Utrechtse Geografische Studies 326.

van Elzakker, C. P. J. M., and Griffin, A. L. (2013). Focus on geoinformation users: Cognitive and use/user issues in contemporary cartography. GIM International, 27(8), 20-23.

van Elzakker, C. P. J. M., and Wealands, K. (2007). Use and Users of Multimedia Cartography. In W. Cartwright, M. Peterson and G. Gartner (Eds.), Multimedia Cartography (pp. 487-504): Springer Berlin Heidelberg.

Whiteside, J., Bennet, J. and Holtzblatt, K. (1988). Usability Engineering: our experience and evolution. In Helander, M. (ed.) Handbook of Human Computer Interaction. Amsterdam: North-Holland.

Learning Objectives: 
  • Describe the differences between usability, utility, and user needs as applied to cartography and visualization.
  • Understand the basics of usability engineering approaches.
  • Compare and contrast different kinds of evaluation methods for cartography and visualization (e.g., qualitative versus quantitative, formative versus summative studies).
  • Determine which methods you can use in a mixed method setting to derive user needs and characteristics for an interactive mapping project.
  • Schedule a user-centered design process for acquiring feedback from target users throughout design and development.
  • Evaluate the usability of an interactive map or visualization according to how the representation and interface features support users' stated needs.
  • Design and implement a series of evaluations to (iteratively) evaluate the usability of (geospatial) products.

 

Instructional Assessment Questions: 
  1. What is usability and what is usability engineering?
  2. What are inspection methods and how do they differ from user-based methods?
  3. What are qualitative methods and how do they differ from quantitative methods?
  4. Why is it a good practice to combine multiple UE methods?
  5. You plan to solicit the requirements of potential users for a new cartographic product (e.g., indoor navigation system). Which methods would you suggest to obtain this information?
  6. You plan to evaluate the usability of a prototype (e.g., indoor navigation system) to learn which visualization would work best (e.g., text descriptions, 2D floor plans, 3D visualization). Which methods would you suggest to obtain the appropriate information?
  7. You aim to evaluate the usability of two alternative user interface designs for a specific target group and set of tasks. Create an experiment with the proper UE methods to evaluate these designs.
  8. Create an experiment with a mixed methods approach for which the reliability of the obtained results is a major concern. Then, create an experiment with a mixed methods approach for which the ecological validity of the obtained results is a major concern.