Publication Date
March 1, 2023
Journal
PLOS Computational Biology
DOI
10.1371/journal.pcbi.1010932
PMID
36972288
PMCID
PMC10079058
PubMedCentral® Posted Date
March 27, 2023
PubMedCentral® Full Text Version
Post-print
Published Open-Access
yes
Keywords
Humans; Neural Networks, Computer; Deep Learning; Visual Perception; Machine Learning; Head
Abstract
Machine learning models have difficulty generalizing to data outside of the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks and common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this simple hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all these robust models showed a greater preference for low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.
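The abstract mentions two frequency-oriented techniques: hybrid images (combining the low spatial frequencies of one image with the high spatial frequencies of another) and blur preprocessing as a defense. A minimal sketch of both ideas is below, using Gaussian filtering as the low-pass operation; the function names and sigma values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(img_a, img_b, sigma=4.0):
    """Combine low spatial frequencies of img_a with high spatial
    frequencies of img_b (an Oliva-style hybrid image).
    sigma is the Gaussian low-pass cutoff; the value is illustrative."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    low = gaussian_filter(a, sigma=sigma)          # low-pass component of img_a
    high = b - gaussian_filter(b, sigma=sigma)     # high-pass residual of img_b
    return low + high

def blur_defense(img, sigma=1.5):
    """Blur-only preprocessing defense: low-pass the input before
    classification to suppress high-frequency perturbations."""
    return gaussian_filter(img.astype(float), sigma=sigma)
```

As a sanity check, a hybrid image built from the same image twice reconstructs that image, since the low- and high-pass components sum back to the original.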