Francisco Javier Ariza López1 and Alan David Atkinson Gordo2
Abstract: This work presents an analysis of some standard methodologies for the positional accuracy assessment of geographic databases, taking into account aspects such as the statistical formulation, the size of the control sample, and the distribution and typology of the control elements. We point out several weaknesses shared by the majority of these standards: scarce formalism, inappropriate terminology for dealing with uncertainty, small minimum recommended sample sizes, no assessment of the base hypotheses of the statistical model being applied, and no information about the statistical behavior and reliability of the method. The analysis developed here can serve as a starting point for the development of improved methodologies. DOI: 10.1061/(ASCE)0733-9453(2008)134:2(45). CE Database subject headings: Standards and codes; Quality control; Accuracy; Methodology; Surveys.
Introduction
The positional accuracy of cartographic products has always been of great importance. Together with logical consistency, it is the quality element of geographic information most extensively used by the national mapping agencies (NMAs), and also the most commonly evaluated quality element (Jakobsson and Vauglin 2002). Positional accuracy is a matter of renewed interest because of the capabilities offered by the global positioning system (GPS) and the need for greater spatial interoperability to support spatial data infrastructures. Differences in the positional behavior of geographic data sets imply an interproduct positional distortion and a barrier to interoperation (Church et al. 1998). This barrier affects not only positional and geometric aspects, but also thematic ones, which are greatly affected by position (Carmel et al. 2006). For these reasons, many NMAs are currently involved in the development of positional accuracy improvement programs.