Automated Pore Detection from In-Situ FDM 3D Printing Video: A Comparative Evaluation of Modern Segmentation Models
Abstract
In extrusion-based fused deposition modeling (FDM) 3D printing, porosity weakens layer adhesion and compromises the mechanical reliability of printed parts, making it one of the most critical defects. While porosity in additive manufacturing has been studied extensively, pixel-level segmentation of pores has received little attention. To address this gap, we collected a new dataset of in-situ video of FDM 3D printing with biofiber-reinforced thermoplastic biopolymers. After manually annotating the frames with polygon-level pore masks, we used the dataset to benchmark four widely used segmentation models (YOLOv8-seg, YOLOv11-seg, Mask R-CNN, and DeepLabV3+) under a consistent training and evaluation protocol. Our results show that YOLOv11-seg achieves the highest segmentation accuracy with a mask mAP@50 of 92.9%, while YOLOv8-seg delivers comparable accuracy (92.6%) with the fastest throughput at nearly 60 FPS, making it particularly well suited to real-time monitoring. DeepLabV3+ and Mask R-CNN provide useful baselines but lag in either efficiency or stability. This work introduces the first annotated dataset and baseline comparison for segmentation of small, irregular defects in FDM, establishing a benchmark relevant both to additive manufacturing and to broader computer vision research on challenging low-contrast defect segmentation.
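The mask mAP@50 metric reported above rests on matching predicted masks to ground-truth masks at an IoU threshold of 0.5. The following is not the paper's evaluation code, only a minimal NumPy sketch of that matching step (the function names `mask_iou` and `match_at_iou50` are illustrative, not from the paper):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0

def match_at_iou50(preds, gts):
    """Greedily match each predicted mask to at most one unmatched
    ground-truth mask with IoU >= 0.5 (the threshold behind mAP@50).
    Returns (true positives, false positives, false negatives)."""
    unmatched = list(range(len(gts)))
    tp = 0
    for p in preds:
        best, best_iou = None, 0.5
        for i in unmatched:
            iou = mask_iou(p, gts[i])
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            unmatched.remove(best)
            tp += 1
    return tp, len(preds) - tp, len(unmatched)

# Toy 8x8 frame with one ground-truth pore and one prediction
# shifted right by one pixel: overlap 6 px, union 12 px -> IoU 0.5.
gt = np.zeros((8, 8), dtype=bool); gt[2:5, 2:5] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:5, 3:6] = True
print(mask_iou(pred, gt))            # 0.5
print(match_at_iou50([pred], [gt]))  # (1, 0, 0)
```

In full COCO-style mAP@50, predictions are additionally sorted by confidence before matching and precision is averaged over the recall curve; the sketch shows only the per-threshold matching.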