Vegetation mapping using discrete-return and full-waveform airborne LiDAR data

Title: Vegetation mapping using discrete-return and full-waveform airborne LiDAR data
Publication Type: Conference Paper
Year of Publication: 2013
Authors: Chen, Dong, Liqiang Zhang, and Zhen Wang
Refereed Designation: Refereed
Conference Name: Geomorphometry 2013
Date Published: 2013
Conference Location: Nanjing, China
Abstract: As a data source for measuring elevations, light detection and ranging (LiDAR) technology provides dense and accurate three-dimensional (3D) coordinates, multi-echo pulses and intensities with high horizontal and vertical accuracy. Discrete-return airborne LiDAR systems have improved significantly, with point densities reaching 10-20 points/m² and, in some cases, even 50 points/m² (Vosselman, 2008; Höfle et al., 2012). In addition, technical improvements in electronic devices for rapidly digitizing and storing mass data have pushed airborne LiDAR systems into a new stage by digitally sampling the entire laser pulse echo as a function of time. The characteristic amplitude profile of the recorded backscattered signal is usually referred to as the full-waveform. The high mean point density of clouds derived from discrete-return systems and full-waveform techniques makes it possible to detect ground objects, especially vegetation (e.g., trees and shrubs). Accurate extraction of vegetation point clouds underpins several basic applications: generating DTMs (Ullrich et al., 2007), establishing or updating urban tree registers (Vosselman, 2005), 3D vegetation reconstruction and visualization in digital urban landscapes, and the inversion of forest stand structure indices (e.g., above-ground biomass, mean canopy height, average basal area, diameter at breast height, leaf area index, etc.). These indices can be used to estimate carbon storage capacity and even to indirectly evaluate the ecological health of both urban and forest areas.

This paper aims to accurately extract vegetation point clouds from discrete-return and full-waveform LiDAR data. At the pre-processing stage, a full-waveform processing technique is first proposed to generate dense 3D point clouds. It comprises two crucial steps: (1) full-waveform decomposition, which searches for the maximum number of return echoes representing the diverse backscattering objects within a laser footprint; and (2) full-waveform modeling, which finds a suitable parametric function to fit each return echo detected by the decomposition. Both steps are formulated as a non-linear least squares optimization problem. Kernel functions such as the Gaussian, generalized Gaussian and Nakagami kernels are integrated into our energy function: the Gaussian and generalized Gaussian kernels fit symmetric reflected echoes, while the Nakagami function models both left-skewed and right-skewed echoes. A reversible jump Markov chain Monte Carlo (RJMCMC) sampler coupled with simulated annealing is adopted to find an optimal solution. Through full-waveform decomposition and modeling, discrete 3D point clouds and significant full-waveform features (e.g., amplitude, echo width, backscatter cross-section, backscatter cross-section per illuminated area and backscatter coefficient) are obtained for segmentation and classification purposes. Because the laser pulse is affected by unintended targets (e.g., birds or aircraft), multi-path errors and errors in the laser range finder (Sithole and Vosselman, 2004), high and low outliers occur both in point clouds digitized directly by the discrete-return LiDAR system and in those generated by full-waveform decomposition. Histogram analysis of the elevation distribution of the point clouds is therefore employed to remove these outliers in advance; minimal sketches of the decomposition and outlier-removal steps follow below.
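As a rough illustration of the decomposition and modeling steps, the sketch below fits a waveform as a mixture of Gaussian echoes by non-linear least squares. This is a deliberate simplification of the paper's method: it uses only the Gaussian kernel (not the generalized Gaussian or Nakagami kernels) and fixes the echo count from peak detection instead of searching it with RJMCMC and simulated annealing. The threshold `min_amplitude` and the initial echo width of 2.0 samples are illustrative assumptions.

```python
# Minimal sketch: full-waveform decomposition with Gaussian kernels only,
# refined by non-linear least squares. The paper's method additionally
# uses generalized Gaussian and Nakagami kernels and an RJMCMC sampler
# with simulated annealing to select the number of echoes.
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import find_peaks

def gaussian_mixture(params, t):
    """Sum of Gaussian echoes; params = [A1, mu1, sigma1, A2, mu2, sigma2, ...]."""
    w = np.zeros_like(t, dtype=float)
    for a, mu, sigma in params.reshape(-1, 3):
        w += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return w

def decompose_waveform(t, w, min_amplitude=5.0):
    """Seed one Gaussian per detected peak, then refine all parameters jointly."""
    peaks, _ = find_peaks(w, height=min_amplitude)
    if peaks.size == 0:
        return np.empty((0, 3))
    p0 = np.ravel([[w[i], t[i], 2.0] for i in peaks])  # (amplitude, position, width)
    fit = least_squares(lambda p: gaussian_mixture(p, t) - w, p0)
    return fit.x.reshape(-1, 3)  # one row per echo
```

Each returned row gives an echo's amplitude, position (range) and width, from which features such as echo width and backscatter cross-section could then be derived.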
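The abstract does not spell out the histogram rule used to remove high and low outliers, so the following sketch is one plausible reading: bin the elevations, treat sparsely populated bins at the tails as outliers, and keep only points between the lowest and highest densely populated bins. The parameters `bin_size` and `min_count` are assumptions.

```python
# Hedged sketch of histogram-based outlier removal: keep points whose
# elevation lies between the lowest and highest densely populated bins.
import numpy as np

def remove_elevation_outliers(points, bin_size=1.0, min_count=3):
    """points: (N, 3) array of x, y, z; returns points within the dense elevation range."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=edges)
    dense = np.flatnonzero(counts >= min_count)
    if dense.size == 0:
        return points
    lo, hi = edges[dense[0]], edges[dense[-1] + 1]
    return points[(z >= lo) & (z <= hi)]
```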
For a laser pulse that hits non-extended targets (e.g., a sparse vegetation canopy, tree branches, or building step edges and ridge edges), at least two echoes are generated: one returns from the tree or building edge, and the other most likely returns from the ground. In contrast, for a laser pulse that hits area-extended targets (e.g., the ground or building rooftop primitives), only one return echo is generated. Based on this principle, preliminary vegetation point clouds can be extracted in advance (see the sketch below); they provide (i) the approximate vegetation distribution of the study areas at the macro level, which serves as prior knowledge; (ii) semantic information for feature selection to assist in classifying unlabeled points; and (iii) reliable vegetation training points for the subsequent supervised, segment-based SVM classification. Object-based classification approaches (Secord and Zakhor, 2006; Rutzinger et al., 2008; Carlberg et al., 2009; Höfle et al., 2012) have already proven highly suitable for classification, with impressive results achieved on multi-spectral images and 3D airborne LiDAR point clouds. In this research, a pre-segmentation algorithm based on probability density analysis is proposed to generate homogeneous segments for a supervised, segment-based SVM classifier that uses the most relevant features derived from geometric, radiometric, multi-echo and full-waveform attributes. Two airborne LiDAR datasets, Helsinki and Dayekou, were used to evaluate the robustness of the proposed approach in urban and mountainous areas, respectively. The full-waveform decomposition and modeling method gives users relatively more opportunities to accurately extract groups of overlapping echoes or weak reflected echoes originating from non-extended targets. The segment-based classification algorithm yields more discriminative features and more robust classification than point-wise or pixel-wise classification. Vegetation points can be extracted well near building boundaries, roof superstructures and densely vegetated regions containing only a few scattered ground points, where they are very hard to distinguish with traditional classification methods.
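The multi-echo principle above translates directly into a simple candidate mask: non-last echoes of multi-echo pulses indicate non-extended targets. The sketch below assumes LAS-style per-point attributes (`return_number`, `number_of_returns`); as the abstract notes, building edges also trigger multiple echoes, so this mask is only a preliminary candidate set, not the final vegetation classification.

```python
# Minimal sketch of the multi-echo rule: pulses with more than one return
# hit non-extended targets; their non-last echoes are candidate vegetation
# (or edge) points, while single returns indicate extended surfaces.
import numpy as np

def preliminary_vegetation_mask(return_number, number_of_returns):
    """Boolean mask of candidate vegetation points (non-last echoes of multi-echo pulses)."""
    rn = np.asarray(return_number)
    nr = np.asarray(number_of_returns)
    return (nr > 1) & (rn < nr)
```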
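Finally, a hedged sketch of the segment-based SVM step: per-segment statistics of point-level attributes stand in for the geometric, radiometric, multi-echo and full-waveform features named above, and scikit-learn's SVC stands in for the paper's classifier. The feature choice (per-segment mean and standard deviation) and the SVM hyperparameters are assumptions, not the paper's configuration.

```python
# Hedged sketch of segment-based SVM classification: aggregate point-level
# attributes per segment, then train/predict with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def segment_features(point_attrs, segment_ids):
    """point_attrs: (N, F) attribute array; returns one mean/std feature row per segment."""
    rows = []
    for s in np.unique(segment_ids):
        p = point_attrs[segment_ids == s]
        rows.append(np.concatenate([p.mean(axis=0), p.std(axis=0)]))
    return np.vstack(rows)

# Train on segments dominated by preliminary vegetation candidates, then
# classify the remaining, unlabeled segments.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
# clf.fit(train_features, train_labels)
# predicted = clf.predict(unlabeled_features)
```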