Systematic analysis of the deep semantic segmentation architecture PSPNet on the ISPRS Vaihingen land cover dataset
Abstract
Eiman Kattan
This paper provides a systematic review of the Pyramid Scene Parsing Network (PSPNet), a deep learning architecture for semantic segmentation, applied to remotely sensed imagery. First, the state-of-the-art deep learning architecture for image-based semantic segmentation is reviewed, highlighting its contribution and significance to the field of image segmentation. Second, the ISPRS benchmark dataset (Vaihingen) is used for testing, with a detailed description of the experimental setting and an analysis of its challenges. Quantitative results are then reported for different numbers of pooling layers and pooling types in the described architecture, followed by a discussion of the results. Notable findings are summarised and recommendations for wider implementation are given. The main contribution of the research is to show that PSPNet can be applied efficiently to land cover classification of remotely sensed imagery, reaching an average accuracy of 0.794218 on the Vaihingen test set when using four pooling layers with average pooling, a configuration that also shows superior performance on small-object segmentation, achieving 0.861777 on the car class. For comparison, the corresponding architecture with four pooling layers and max pooling achieves an average accuracy of 0.7963976. From a practical point of view, all experiments were run on an NVIDIA GeForce GTX 1080 Ti GPU, and the architectures were implemented in Python using the TensorFlow deep learning framework. The implementation of the selected, recently developed deep semantic segmentation methods also proved highly effective at exposing annotation limitations in the evaluated dataset, for which a revision of the ground truth is strongly recommended.
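The pooling-layer and pooling-type comparison described above refers to the pyramid pooling module at the core of PSPNet, whose parallel pooling branches can be built with either average or max pooling. The following is a minimal, illustrative TensorFlow/Keras sketch of such a module; the function name, the four bin sizes (1, 2, 3, 6), the channel-reduction scheme, the example input shape, and the `pool_type` switch are assumptions made for illustration and are not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def pyramid_pooling_module(features, bin_sizes=(1, 2, 3, 6), pool_type="avg"):
    """Illustrative PSPNet-style pyramid pooling with a selectable pooling type.

    `features` is a 4-D tensor (batch, height, width, channels). Each pyramid
    level pools the map down to roughly bin x bin, reduces its channels with a
    1x1 convolution, upsamples back to the input resolution, and all branches
    are concatenated with the original features.
    """
    _, h, w, c = features.shape
    branch_channels = c // len(bin_sizes)
    pool_layer = layers.AveragePooling2D if pool_type == "avg" else layers.MaxPooling2D

    branches = [features]
    for bin_size in bin_sizes:
        stride = (h // bin_size, w // bin_size)
        x = pool_layer(pool_size=stride, strides=stride)(features)
        x = layers.Conv2D(branch_channels, 1, use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        # Resize each pooled branch back to the input resolution before fusion.
        x = layers.Lambda(lambda t: tf.image.resize(t, (h, w), method="bilinear"))(x)
        branches.append(x)
    return layers.Concatenate()(branches)

# Example: a backbone feature map at 1/8 resolution of a 512x512 tile,
# classified into the 6 ISPRS Vaihingen land cover classes.
inputs = tf.keras.Input(shape=(64, 64, 2048))
fused = pyramid_pooling_module(inputs, pool_type="avg")   # or pool_type="max"
outputs = layers.Conv2D(6, 1, activation="softmax")(fused)
model = tf.keras.Model(inputs, outputs)
```

Switching `pool_type` between "avg" and "max" while keeping the four pyramid levels fixed reproduces, in spirit, the comparison reported in the abstract.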