Bio

I am a PhD Candidate at the University of British Columbia in the Integrated Remote Sensing Studio (IRSS) under the supervision of Dr. Nicholas Coops. My research focuses on predicting tree species with data fusion and deep learning techniques. In 2021, I graduated with a Master of Geomatics for Environmental Management (MGEM) from the University of British Columbia, where I completed a capstone project examining forest resilience after major wildfire events in British Columbia.

Currently, I am working as a teaching assistant in the MGEM program for the course Remote Sensing for Ecosystem Management, which provides an introduction to remote sensing and its application in mapping, monitoring, and managing forestry, vegetation, and ecosystem resources. I also serve as a technical mentor for a group of MGEM students, offering support and guidance for their capstone projects.


Recent Publications

M3FNet: Multi-modal multi-temporal multi-scale data fusion network for tree species composition mapping

Yuwei Cao, Nicholas C. Coops, Brent A. Murray, Ian Sinclair, Geordie Robere-McGugan

Abstract

Accurate estimation and mapping of tree species composition (TSC) is crucial for sustainable forest management. Recent advances in Light Detection and Ranging (lidar) technology and the availability of moderate spatial resolution, surface reflectance time series passive optical imagery offer scalable and efficient approaches for automated TSC estimation. In this research we develop a novel deep learning framework, M3F-Net (Multi-modal, Multi-temporal, and Multi-scale Fusion Network), that integrates multi-temporal Sentinel-2 (S2) imagery and single photon lidar (SPL) data to estimate TSC for nine common species across the 630,000-hectare Romeo Malette Forest in Ontario, Canada. A dual-level alignment strategy combines (i) superpixel-based spatial aggregation to reconcile mismatched resolutions between high-resolution SPL point clouds (>25 pts/m²) and coarser S2 imagery (20 m), and (ii) a grid-based feature alignment that transforms unordered 3D point cloud features into structured 2D representations, enabling seamless integration of spectral and structural information. Within this aligned space, a multi-level Mamba-Fusion module jointly models multi-scale spatial patterns and seasonal dynamics through selective state-space modelling, efficiently capturing long-range dependencies while filtering redundant information. The framework achieves an R² of 0.676, outperforming existing point cloud-based methods by 6% in TSC estimation. For leading species classification, it improves weighted F1 by 6%, whether the leading species is derived from the TSC predictions or from a standalone leading species classifier. Adding seasonal S2 imagery yielded a 10% R² gain over the SPL-only model. These results underscore the potential of fusing multi-modal and multi-temporal data with deep learning for scalable, highly accurate TSC estimation, offering a robust tool for large-scale management applications.
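
As a rough illustration of the grid-based feature alignment idea described in the abstract, here is a minimal PyTorch sketch that pools unordered per-point features into a structured 2D grid so lidar structure can be stacked with gridded imagery. The grid size, feature dimension, and max-pooling choice are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch of grid-based feature alignment: scatter per-point features
# into a 2D grid so lidar structure can be fused with Sentinel-2 bands.
# Grid size, feature dimension, and pooling are illustrative assumptions.
import torch

def grid_align(points: torch.Tensor, feats: torch.Tensor,
               extent: float = 20.0, cells: int = 8) -> torch.Tensor:
    """Pool unordered point features (N, C) into a (C, cells, cells) grid.

    points: (N, 3) x/y/z coordinates within one 20 m Sentinel-2 pixel.
    feats:  (N, C) non-negative per-point features (e.g. post-ReLU encoder output).
    """
    # Map x/y coordinates to integer cell indices.
    ij = (points[:, :2] / extent * cells).long().clamp(0, cells - 1)
    flat = ij[:, 0] * cells + ij[:, 1]                  # (N,) flat cell index
    C = feats.shape[1]
    grid = torch.zeros(cells * cells, C)
    # Max-pool point features that fall into the same cell (empty cells stay 0).
    grid = grid.scatter_reduce(0, flat.unsqueeze(1).expand(-1, C),
                               feats, reduce="amax", include_self=True)
    return grid.T.reshape(C, cells, cells)              # structured 2D feature map

# Example: 500 random points with 16-d features inside one 20 m cell.
pts = torch.rand(500, 3) * 20.0
f = torch.rand(500, 16)
aligned = grid_align(pts, f)       # (16, 8, 8), ready to stack with imagery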

Enhancing tree species composition mapping using Sentinel-2 and multi-seasonal deep learning fusion

Yuwei Cao, Nicholas C. Coops, Brent A. Murray, Ian Sinclair

Abstract

Accurate wall-to-wall mapping of tree species composition (TSC) is essential for effective forest management. However, distinguishing species-level information from satellite imagery remains a challenge due to the coarse spatial resolution of open-access satellite imagery. In this study, we present the first systematic evaluation of spatial resolution enhancement and multi-seasonal data fusion for deep learning (DL)-based TSC mapping using Sentinel-2 imagery. Specifically, we assessed: (1) the impact of different spatial resolutions and enhancement methods, comparing native 20 m Sentinel-2 imagery against bilinearly resampled imagery at 10 m and 5 m, super-resolution (SR)-enhanced imagery at 10 m, and their combined use; (2) the contributions of multi-seasonal imagery and auxiliary environmental data (climate, topography); and (3) the effectiveness of a novel multi-source multi-seasonal fusion (MSMSF) method for integrating seasonal and environmental datasets. Our results demonstrated substantial improvements (7% higher adjusted R²) when increasing spatial resolution from 20 m to 10 m, and achieved the best result (RMSE = 0.120, adjusted R² = 0.731) by combining bilinearly resampled 5 m and SR-enhanced 10 m datasets. Additionally, our proposed MSMSF module and multi-seasonal data outperformed the best single-season model by >5% in adjusted R². These findings establish a new benchmark for DL-based TSC mapping and highlight the novelty of combining resolution enhancement with a detail-preserving fusion strategy to enable scalable, high-precision forest inventories using freely available satellite data.
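
To make the resolution-enhancement and seasonal-stacking idea concrete, the sketch below bilinearly resamples 20 m tiles and concatenates several seasonal composites along the channel axis before they would be fed to a CNN. Band counts, seasons, and tile sizes are assumptions for illustration; the study's SR enhancement and MSMSF module are considerably more involved.

# Minimal sketch: bilinear resampling of 20 m Sentinel-2 tiles to 10 m, then
# channel-wise stacking of seasonal composites. All shapes are assumptions.
import torch
import torch.nn.functional as F

def enhance_and_stack(seasons: list, scale: int = 2) -> torch.Tensor:
    """seasons: list of (B, C, H, W) 20 m tiles, one tensor per season."""
    up = [F.interpolate(s, scale_factor=scale, mode="bilinear",
                        align_corners=False) for s in seasons]
    return torch.cat(up, dim=1)    # (B, C * n_seasons, H*scale, W*scale)

# Example: four seasonal 10-band tiles at 20 m (32x32 px) -> one 10 m stack.
tiles = [torch.rand(1, 10, 32, 32) for _ in range(4)]
x = enhance_and_stack(tiles)       # (1, 40, 64, 64)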

Individual tree species prediction using airborne laser scanning data and derived point-cloud metrics within a dual-stream deep learning approach

Brent A. Murray, Nicholas C. Coops, Joanne C. White, Adam Dick, Ignacio Barbeito, Ahmed Ragab

Abstract

Accurate tree species mapping is essential for effective forest management but is often constrained by manual, labour-intensive workflows that limit scalability. While airborne laser scanning (ALS) supports large-scale forest attribute prediction, species classification remains difficult in complex, multi-species forests. To address this, we propose an automated, data-driven dual-stream deep learning framework that integrates ALS data with point-cloud metrics to identify individual tree species. Our framework incorporates an automated approach to individual tree segmentation and species labelling using existing forest inventory and field data, resulting in a dataset of 16,269 labelled individual tree point-clouds of four species across a 630,000 ha boreal mixed-species forest in Ontario, Canada. Our dual-stream deep learning model integrates a Point Extractor to generate feature representations from raw ALS point-clouds and a complementary Metrics Network to process the point-cloud metrics. Results, based on a held-out test set of 2441 trees, showed that the inclusion of the Metrics Network improved tree species classification accuracy by approximately 11% compared to models that rely solely on the Point Extractor. A weighted F1-score of 0.70 and an area under the receiver operating characteristic curve of 0.88 were achieved using this dual-stream approach, along with enhanced predictive probabilities for all species, thus improving the reliability of the predicted results. This approach reduces the manual processing bottleneck of individual tree segmentation and labelling and demonstrates the value of combining raw point-clouds and point-cloud metrics within a deep learning framework, offering a scalable and operational solution for reliable species predictions.
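
In the spirit of the dual-stream design described above, here is a minimal PyTorch sketch that fuses a PointNet-style point encoder with a small MLP over hand-crafted point-cloud metrics by concatenation. Layer sizes and the number of metrics are assumptions; this is not the authors' architecture.

# Minimal dual-stream sketch: a per-point MLP with global max pooling
# (PointNet-style) for raw point clouds, plus an MLP for lidar metrics,
# fused by concatenation before the classification head. Sizes are assumed.
import torch
import torch.nn as nn

class DualStream(nn.Module):
    def __init__(self, n_metrics: int = 30, n_species: int = 4):
        super().__init__()
        # Point Extractor: shared per-point MLP, pooled to a global descriptor.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1))
        # Metrics Network: small MLP over per-tree point-cloud metrics.
        self.metrics_mlp = nn.Sequential(
            nn.Linear(n_metrics, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(256 + 64, n_species)

    def forward(self, pts: torch.Tensor, metrics: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) tree point clouds; metrics: (B, n_metrics).
        g = self.point_mlp(pts.transpose(1, 2)).max(dim=2).values  # (B, 256)
        m = self.metrics_mlp(metrics)                              # (B, 64)
        return self.head(torch.cat([g, m], dim=1))                 # class logits

# Example: batch of 8 trees, 1024 points each, 30 metrics per tree.
model = DualStream()
logits = model(torch.rand(8, 1024, 3), torch.rand(8, 30))  # (8, 4)

Late fusion by concatenation keeps the two streams independent until the head, which is one simple way a metrics branch can complement raw-point features.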

Tree species proportion prediction using airborne laser scanning and Sentinel-2 data within a deep learning based dual-stream data fusion approach

Brent A. Murray, Nicholas C. Coops, Joanne C. White, Adam Dick, Ahmed Ragab

Abstract

The integration of airborne laser scanning (ALS) technology into forest inventory practices has significantly improved forest management by providing accurate predictions of forest structural attributes. However, ALS offers limited insight into the spectral properties of tree crowns, hindering the accurate prediction of various physiological attributes and the identification of tree species. The fusion of multitemporal spectral information with ALS data has been proposed as an important step towards addressing this limitation. While previous studies have explored combining ALS with optical data for forest species mapping, the fusion process often requires feature generation and selection, which restrict the scalability and effectiveness of these approaches. There remains a need for an approach that effectively leverages both the structural information of ALS and the spectral dynamics of optical imagery in a fully data-driven manner. We propose a novel dual-stream deep learning approach that fuses ALS point-cloud data with multitemporal Sentinel-2 (S2) imagery to predict the proportions of seven species and two genera across a 630,000 ha Canadian boreal forest, capturing both structural and spectral features within 20 m grid cells. The results showed an R² of 0.58 and an RMSE of 0.14 across all proportional values, with an 8% increase in accuracy for the detection of broadleaf species when using seasonal multispectral images compared to using ALS data alone. Additionally, lower R² values (0.49–0.57) were observed when only the S2 imagery was used. When identifying the leading species from the model predictions, a weighted F1 score of 0.62 and an overall accuracy of 0.65 were achieved for the seven species and two genera. This research highlights the potential of deep learning and data fusion to advance forest inventory practices by offering a scalable and reproducible method for detailed mapping of species proportions.
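
As a small illustration of framing species mapping as proportion prediction, the sketch below uses a softmax head so each grid cell's nine outputs (seven species plus two genera) are non-negative and sum to one. The fused feature size, head depth, and loss choice are illustrative assumptions rather than the paper's setup.

# Minimal sketch: a softmax head turns fused ALS+S2 cell features into nine
# proportions summing to one, trained against inventory reference proportions.
# Feature size (320), head depth, and MSE loss are assumptions.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Linear(320, 128), nn.ReLU(), nn.Linear(128, 9))

def proportion_loss(fused: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """fused: (B, 320) fused cell features; target: (B, 9) reference proportions."""
    pred = torch.softmax(head(fused), dim=1)    # (B, 9), rows sum to 1
    return nn.functional.mse_loss(pred, target)

# Example: 16 grid cells; targets are per-cell species proportions.
t = torch.rand(16, 9)
t = t / t.sum(dim=1, keepdim=True)              # normalise to valid proportions
loss = proportion_loss(torch.rand(16, 320), t)

The leading species per cell would then simply be the argmax of the predicted proportion vector.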