Presentation + Paper
Deepfaking it: experiments in generative, adversarial multispectral remote sensing
23 April 2021
Abstract
In this work we use generative adversarial networks (GANs) to synthesize realistic transformations of multispectral remote sensing imagery. Despite the apparent perceptual realism of the transformed images at first glance, we show that a deep learning classifier can easily be trained to differentiate between real and GAN-generated images, likely due to subtle but pervasive artifacts introduced by the GAN during synthesis. We also show that a very low-amplitude adversarial attack can easily fool this classifier, although such attacks can be partially mitigated via adversarial training. Finally, we explore the features the classifier uses to distinguish real images from GAN-generated ones, and how adversarial training shifts the classifier's focus toward different, lower-frequency features.
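To make the pipeline described in the abstract concrete, the sketch below pairs a small real-vs-GAN detector with a one-step gradient-sign (FGSM-style) perturbation, an adversarial-training update, and an input-gradient spectrum probe of the kind one might use to inspect a shift toward lower-frequency features. This is a minimal illustration, not the authors' implementation: the `RealVsGanNet` architecture, the 8-band input shape, the choice of FGSM, and the perturbation budget `eps=0.01` are all assumptions made here for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RealVsGanNet(nn.Module):
    """Tiny CNN standing in for a real-vs-GAN-generated detector (illustrative)."""
    def __init__(self, bands=8):
        super().__init__()
        self.conv1 = nn.Conv2d(bands, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.head = nn.Linear(32, 2)  # classes: 0 = real, 1 = GAN-generated

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.head(x.mean(dim=(2, 3)))  # global average pooling

def fgsm_attack(model, x, y, eps=0.01):
    """One-step FGSM: nudge x by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def gradient_spectrum(model, x, y):
    """FFT magnitude of the input gradient: a crude probe of which
    spatial frequencies the classifier's decision depends on."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    sal = x.grad.abs().mean(dim=1)  # average saliency over spectral bands
    return torch.fft.fftshift(torch.fft.fft2(sal), dim=(-2, -1)).abs().mean(dim=0)

model = RealVsGanNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 8, 64, 64)    # stand-in batch of 8-band image chips
y = torch.randint(0, 2, (4,))   # real vs. GAN-generated labels

# One adversarial-training step: train on clean and perturbed inputs together.
x_adv = fgsm_attack(model, x, y)
opt.zero_grad()
loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
loss.backward()
opt.step()

print(gradient_spectrum(model, x, y).shape)  # (64, 64) mean spectrum map
```

Comparing `gradient_spectrum` output for a standard and an adversarially trained classifier is one simple way to visualize whether the decision relies on high- or low-frequency content, in the spirit of the paper's feature analysis.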
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Christopher X. Ren, Amanda Ziemann, James Theiler, and Juston Moore "Deepfaking it: experiments in generative, adversarial multispectral remote sensing", Proc. SPIE 11727, Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imaging XXVII, 117270M (23 April 2021); https://doi.org/10.1117/12.2587753
KEYWORDS
Remote sensing
Video
Data modeling
Generative adversarial networks (GANs)
Motion models
Neural networks
Satellite imaging