
PureForest: A Large-Scale Aerial Lidar and Aerial Imagery Dataset for Tree Species Classification in Monospecific Forests (2404.12064v2)

Published 18 Apr 2024 in cs.CV and cs.LG

Abstract: Knowledge of tree species distribution is fundamental to managing forests. New deep learning approaches promise significant accuracy gains for forest mapping, and are becoming a critical tool for mapping multiple tree species at scale. To advance the field, deep learning researchers need large benchmark datasets with high-quality annotations. To this end, we present the PureForest dataset: a large-scale, open, multimodal dataset designed for tree species classification from both Aerial Lidar Scanning (ALS) point clouds and Very High Resolution (VHR) aerial images. Most current public Lidar datasets for tree species classification have low diversity as they only span a small area of a few dozen annotated hectares at most. In contrast, PureForest has 18 tree species grouped into 13 semantic classes and spans 339 km² across 449 distinct monospecific forests, making it to date the largest and most comprehensive Lidar dataset for the identification of tree species. By making PureForest publicly available, we hope to provide a challenging benchmark dataset to support the development of deep learning approaches for tree species identification from Lidar and/or aerial imagery. In this data paper, we describe the annotation workflow, the dataset, the recommended evaluation methodology, and establish a baseline performance from both 3D and 2D modalities.
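For readers who want to experiment with the two modalities, the sketch below shows one plausible way to load a single annotated patch from the ALS point cloud and the co-registered VHR image and normalize it for a classifier. It is a minimal illustration only: the file names, directory layout, and preprocessing choices are assumptions rather than the dataset's official API, and laspy and rasterio are simply common readers for LAS/LAZ point clouds and GeoTIFF imagery.

```python
# Minimal sketch of preparing one PureForest-style sample for a classifier.
# The paths, directory layout, and normalization below are assumptions for
# illustration; consult the dataset documentation for the actual structure.
import numpy as np
import laspy      # reads ALS point clouds stored as LAS/LAZ
import rasterio   # reads VHR aerial image patches stored as GeoTIFF


def load_lidar_patch(las_path: str) -> np.ndarray:
    """Return an (N, 4) array of x, y, z, intensity for one annotated patch."""
    las = laspy.read(las_path)
    pts = np.stack([las.x, las.y, las.z, las.intensity], axis=1).astype(np.float32)
    # Center horizontally and shift heights to start at zero so the model
    # sees coordinates independent of the patch's absolute position.
    pts[:, :2] -= pts[:, :2].mean(axis=0)
    pts[:, 2] -= pts[:, 2].min()
    return pts


def load_image_patch(tif_path: str) -> np.ndarray:
    """Return a (C, H, W) float array of the co-registered aerial image."""
    with rasterio.open(tif_path) as src:
        return src.read().astype(np.float32) / 255.0  # assumes 8-bit imagery


if __name__ == "__main__":
    # Hypothetical file names; each patch carries a single species-class label.
    points = load_lidar_patch("pureforest/lidar/patch_0001.laz")
    image = load_image_patch("pureforest/imagery/patch_0001.tif")
    print(points.shape, image.shape)
```

In practice the point array would feed a 3D architecture such as PointNet++ or a Superpoint Transformer, while the image patch would feed a standard 2D CNN, in line with the 3D and 2D baselines the abstract mentions.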
