ISSN : 1229-3431(Print)
ISSN : 2287-3341(Online)
Journal of the Korean Society of Marine Environment and Safety Vol.30 No.7 pp.717-727
DOI : https://doi.org/10.7837/kosomes.2024.30.7.717
Research on the Development and Measurement Methods of Deep Learning-Based Marine Life Detection Technology
Abstract
This study compares the performance of YOLO (You Only Look Once) segmentation-based marine life detection models and develops a deep learning model for correcting color distortion in underwater images. The detection models were built on the instance segmentation models YOLOv5-Seg, YOLOv8-Seg, YOLOv9-Seg, and YOLOv11-Seg officially provided by Ultralytics, and all versions were trained on an identical dataset of 22 marine species to ensure a consistent comparison. The results demonstrated that YOLOv9c-Seg achieved the highest performance, with a precision of 0.908, a recall of 0.912, and an mAP@50 of 0.943, making it the optimal model for marine life detection. To address color distortion in underwater environments and improve detection accuracy, a PhysicalNN-based image correction model was developed, incorporating RGB transformation techniques such as CLAHE (Contrast Limited Adaptive Histogram Equalization), white balance, and image filtering. Using the selected detection and image correction models, we accurately identified the locations of marine organisms in underwater footage. Additionally, employing a Monocular Depth Estimation (MDE) algorithm with a guide stick as a reference point, we estimated the distance to and size of the detected organisms. This research highlights the potential of indirectly estimating the size (10.0–35.0 cm) and weight of marine life in 3D space from single-camera footage, offering practical implications for future marine ecosystem monitoring.
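The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: a simple gray-world white balance and CLAHE correction applied to an underwater frame, followed by inference with an Ultralytics YOLOv9 segmentation checkpoint. The file names, the gray-world variant of white balancing, and the CLAHE parameters are assumptions; the authors' PhysicalNN correction model and fine-tuned weights are not reproduced here.

```python
# Sketch: underwater color correction + YOLO-Seg inference (illustrative only).
import cv2
import numpy as np
from ultralytics import YOLO

def correct_underwater_frame(bgr: np.ndarray) -> np.ndarray:
    """Gray-world white balance followed by CLAHE on the lightness channel."""
    # Gray-world white balance: scale each channel toward the global mean.
    channel_means = bgr.reshape(-1, 3).mean(axis=0)
    gray_mean = channel_means.mean()
    balanced = np.clip(bgr * (gray_mean / channel_means), 0, 255).astype(np.uint8)

    # CLAHE on the L channel to restore local contrast lost underwater.
    lab = cv2.cvtColor(balanced, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Base Ultralytics checkpoint; the paper's model would be fine-tuned on
# its 22-species marine dataset (weights not available here).
model = YOLO("yolov9c-seg.pt")

frame = cv2.imread("underwater_frame.jpg")   # placeholder input frame
corrected = correct_underwater_frame(frame)
results = model(corrected)                   # instance masks + bounding boxes
results[0].show()                            # visualize detected organisms
```

In this sketch the correction step stands in for the paper's PhysicalNN model; swapping in a learned correction network would only change the `correct_underwater_frame` call, since the detector consumes an ordinary BGR image either way.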