Award Date
May 2025
Degree Type
Thesis
Degree Name
Master of Science in Engineering (MSE)
Department
Electrical and Computer Engineering
First Committee Member
Mei Yang
Second Committee Member
Brendan Morris
Third Committee Member
Shahram Latifi
Fourth Committee Member
Shengjie Zhai
Fifth Committee Member
Fatma Nasoz
Number of Pages
95
Abstract
The integration of autonomous unmanned aerial vehicles (UAVs) with edge computing technology and deep learning (DL)-based object detection offers a groundbreaking solution for real-time wildfire detection, enabling rapid data processing directly on devices and minimizing response delays in critical scenarios. However, although such systems show early promise, their performance is often constrained by limited training data and by edge computing devices that lack graphics processing unit (GPU) acceleration. This thesis addresses these limitations in two stages.
First, this work explores the potential of Transfer Learning (TL) to enhance the accuracy of wildfire object detection models while also investigating TL's impact on edge computing performance metrics for DL-based object detectors, including inference speed, power consumption, and energy efficiency. To this end, we introduce the Aerial Fire and Smoke Essential (AFSE) dataset as a target dataset and use the Flame and Smoke Detection Dataset (FASDD) and the general-purpose Microsoft Common Objects in Context (COCO) dataset as source datasets. Leveraging the AFSE, FASDD, COCO, and D-FIRE datasets, we also develop and test a two-stage cascaded TL approach. Single-stage TL significantly enhanced the detection accuracy of the You Only Look Once version 5 nano (YOLOv5n) model, achieving up to 79.2% mean Average Precision (mAP@0.5), while also reducing training time and increasing model generalizability across the AFSE dataset. Notably, cascaded TL yielded no further improvement, and TL alone did not enhance edge computing performance metrics.
Second, this research develops a novel one-stage object detection algorithm based on the YOLOv5n architecture, optimized specifically for central processing unit (CPU)-based edge computing devices. YOLOv5n was selected for modification after demonstrating superior speed and accuracy in edge computing applications; without hardware acceleration, an unmodified YOLOv5n model is shown to perform inference at nearly twice the speed of YOLO11n, the latest model in the YOLO family of object detectors. The architecture modifications include a MobileNetV3-Small backbone, Ghost Convolution modules, half the number of output channels in the neck, and 3×3 kernels in the first convolution of every Bottleneck module. After training, the PyTorch weights are exported to two deployment-optimized frameworks, Open Neural Network Exchange (ONNX) and Open Visual Inference and Neural Network Optimization (OpenVINO), to accelerate CPU-based inference. Compared to the original YOLOv5n, the modified model converted to OpenVINO achieves a 423% increase in inference speed, reaching up to 31.9 frames per second (FPS), along with an 11.4% reduction in power consumption on a CPU-based edge computing device. The experimental results confirm TL's role in improving the accuracy of early-wildfire object detectors and show that the optimized architecture significantly improves detection speed, power consumption, and overall energy efficiency on CPU-based edge computing devices.
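The deployment step summarized above (running an exported detector on a CPU-only edge device through ONNX Runtime and OpenVINO) can be illustrated with a minimal sketch. It assumes an already-exported ONNX file; the file name yolov5n_modified.onnx, the 640×640 input size, and the run count are illustrative placeholders, not the thesis artifacts, and the snippet uses the OpenVINO 2022+ Python API.

```python
# Minimal sketch: CPU-only throughput check of an exported detector with
# ONNX Runtime and the OpenVINO runtime. Paths and sizes are assumptions.
import time

import numpy as np
import onnxruntime as ort
from openvino.runtime import Core  # OpenVINO 2022+ Python API

ONNX_PATH = "yolov5n_modified.onnx"                        # hypothetical exported model
INPUT = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy image batch
RUNS = 100

# --- ONNX Runtime on the CPU execution provider ---
sess = ort.InferenceSession(ONNX_PATH, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
start = time.perf_counter()
for _ in range(RUNS):
    sess.run(None, {input_name: INPUT})
print(f"ONNX Runtime: {RUNS / (time.perf_counter() - start):.1f} FPS")

# --- OpenVINO runtime, model compiled for the CPU device ---
core = Core()
compiled = core.compile_model(core.read_model(ONNX_PATH), "CPU")
start = time.perf_counter()
for _ in range(RUNS):
    compiled([INPUT])  # synchronous inference on the compiled model
print(f"OpenVINO:     {RUNS / (time.perf_counter() - start):.1f} FPS")
```

On GPU-less hardware, a comparison of this kind is how the frame-rate and energy-efficiency gains reported in the abstract would typically be measured, with power draw logged separately by an external meter or the platform's telemetry.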
Keywords
computer vision; deep learning; edge computing; ONNX; OpenVINO; You Only Look Once
Disciplines
Computer Sciences | Electrical and Computer Engineering
Degree Grantor
University of Nevada, Las Vegas
Language
English
Repository Citation
Vazquez, Giovanny, "An Edge Computing Device Optimized and Transfer Learning Enhanced Deep Learning Model for Detecting Wildfire Flame and Smoke" (2025). UNLV Theses, Dissertations, Professional Papers, and Capstones. 5347.
https://oasis.library.unlv.edu/thesesdissertations/5347
Rights
IN COPYRIGHT. For more information about this rights statement, please visit http://rightsstatements.org/vocab/InC/1.0/