A Robust Model Against Adversarial Attacks for Deep Learning-Based Object Detection
In this paper, we introduce a new histogram-based method for making object detectors robust to adversarial attacks. We apply the method to two object detectors, YOLOv5 and Faster R-CNN (FRCNN), obtaining two attack-resistant models. To verify their performance, we adversarially trained these models against three targeted attacks (TOG-vanishing, TOG-mislabeling, and TOG-fabrication) and one untargeted attack (DAG). We evaluated the resulting models on MS COCO and PASCAL VOC, two of the best-known benchmark datasets for object detection. The results show that, in addition to improving adversarial accuracy, the method also improves the clean accuracy of the object detectors to some extent. Averaged across the adversarial attacks considered, the clean accuracy of YOLOv5-n on PASCAL VOC is 85.5% when no defense is applied and 87.36% when the histogram method is applied. According to the results, the best adversarial accuracy of YOLOv5-n, which exceeds that of the other models, is obtained under the TOG-vanishing and TOG-fabrication attacks, at 48% and 52.36%, respectively.
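The abstract does not specify which histogram operation underlies the defense, so the following is only an illustrative sketch: it assumes the defense is a per-channel histogram equalization applied to each input image before it reaches the detector. The function names (`equalize_channel`, `histogram_preprocess`) and the final `model(...)` call are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def equalize_channel(channel: np.ndarray) -> np.ndarray:
    """Histogram-equalize a single uint8 image channel."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    # Map each intensity through the normalized CDF so the output
    # histogram is approximately uniform.
    lut = np.floor(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[channel]

def histogram_preprocess(image: np.ndarray) -> np.ndarray:
    """Equalize each channel of an HxWx3 uint8 image before detection
    (hypothetical defense step; the paper's exact transform may differ)."""
    return np.stack(
        [equalize_channel(image[..., c]) for c in range(image.shape[-1])],
        axis=-1,
    )

# Hypothetical usage: preprocess the input, then run the detector on it.
# detections = model(histogram_preprocess(image))
```

One plausible rationale for such a preprocessing step, under these assumptions, is that the quantized, image-dependent intensity remapping disrupts the small gradient-crafted perturbations that attacks like TOG and DAG rely on, while leaving the semantic content of clean images largely intact.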