Analyzing Out-of-Domain Generalization Performance of Pre-Trained Segmentation Models


Johnson Zhong

Abstract

Artists illustrate objects at varying degrees of complexity. As the amount of detail or the realism of a depiction decreases, the object tends to be reduced to its simplest, most relevant high-level features (Harrison, 1981). One reason Deep Neural Networks (DNNs) may fail to identify objects in an image is that they cannot recognize the relative importance of features such as shape, depth, or color, so even minute pixel-level distortions that are imperceptible to humans can severely degrade the performance of object detection models (Eykholt et al., 2018). Training DNNs on artworks, in which the most prominent features defining an object are emphasized, might therefore make a model more resilient to small-scale changes in an image. In this paper, the correlation between the realism of images and artworks of an object and the accuracy of object detection models is investigated to test how well these models identify the most salient features of a particular object. The results outline the efficacy of models trained only on real images at identifying increasingly abstract artworks that reduce an object to its most prominent features. The experiment shows that model accuracy decreases as the images or illustrations become more abstract or simplified, suggesting that the high-level features that identify a particular object differ between object detection models and humans.
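The evaluation described above can be sketched in code. The snippet below is a minimal, illustrative example, not the paper's actual pipeline: it assumes a COCO-pretrained Mask R-CNN from torchvision, a hypothetical target class ("dog"), and a hypothetical folder layout (images/photo, images/cartoon, images/sketch) grouping test images by abstraction level, then reports how often the pre-trained model still detects the target at each level.

```python
# Minimal sketch: measure how detection accuracy of a pre-trained model changes
# as test images become more abstract. Folder names, target class, and threshold
# are assumptions for illustration, not taken from the paper.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

TARGET = "dog"          # hypothetical object class under study
SCORE_THRESHOLD = 0.5   # confidence cutoff for counting a detection


def detects_target(image_path: Path) -> bool:
    """Return True if the pre-trained model detects the target class in the image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        out = model([preprocess(img)])[0]
    for label, score in zip(out["labels"], out["scores"]):
        if score.item() >= SCORE_THRESHOLD and categories[label.item()] == TARGET:
            return True
    return False


# Accuracy per abstraction level: fraction of images in which the target is found.
for level in ["photo", "cartoon", "sketch"]:
    paths = sorted(Path("images", level).glob("*.jpg"))
    hits = sum(detects_target(p) for p in paths)
    print(f"{level}: {hits}/{len(paths)} images with a detected {TARGET}")
```

Under the paper's findings, one would expect the detection rate printed for the more abstract levels to fall below that of the photographic images.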



This work is licensed under a Creative Commons Attribution 4.0 License.