CycleGAN, a Master of Steganography

arXiv:1712.02950v2 [cs.CV] 16 Dec 2017

Casey Chu Stanford University [email protected]

Andrey Zhmoginov Google Inc. [email protected]

Mark Sandler Google Inc. [email protected]

Abstract

CycleGAN [Zhu et al., 2017] is one recent successful approach to learning a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to "hide" information about a source image inside the images it generates, in a nearly imperceptible, high-frequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN's training procedure as training a generator of adversarial examples, and we demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks.

1 Introduction

Image-to-image translation is the task of taking an image from one class of images and rendering it in the style of another class. One famous example is artistic style transfer, pioneered by Gatys et al. [2015], which is the task of rendering a photograph in the style of a famous painter.

One recent technique for image-to-image translation is CycleGAN [Zhu et al., 2017]. It is particularly powerful because it requires only unpaired examples from two image domains X and Y. CycleGAN works by training two transformations F : X → Y and G : Y → X in parallel, with the goal of satisfying the following two conditions:

1. F x ∼ p(y) for x ∼ p(x), and Gy ∼ p(x) for y ∼ p(y);
2. GF x = x for all x ∈ X, and F Gy = y for all y ∈ Y,

where p(x) and p(y) describe the distributions of the two domains of images X and Y. The first condition ensures that the generated images appear to come from the desired domains and is enforced by training two discriminators, one on X and one on Y. The second condition ensures that the information about a source image is encoded in the generated image and is enforced by a cyclic consistency loss of the form ||GF x − x|| + ||F Gy − y||. The hope is that the information about the source image x is encoded semantically into elements of the generated image F x.

For our experiments, we trained a CycleGAN model on a "maps" dataset consisting of approximately 1,000 aerial photographs X and 1,000 maps Y. The model, trained for 500 epochs, produced two transformations F : X → Y and G : Y → X that generated realistic samples from these image domains.
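The cyclic consistency term can be made concrete with a toy example. The sketch below is ours, not from the paper's codebase: it implements the loss in NumPy using invertible linear maps as stand-ins for the convolutional generators F and G. With G an exact inverse of F the loss is zero; any information F discards shows up as a positive loss.

```python
import numpy as np

# Toy stand-ins for the generators: in the actual model F and G are
# convolutional networks; here they are linear maps chosen purely for
# illustration, so that G is exactly the inverse of F.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A_inv = np.linalg.inv(A)

def F(x):  # "aerial photo -> map"
    return x @ A

def G(y):  # "map -> aerial photo"
    return y @ A_inv

def cycle_consistency_loss(x, y):
    """||GF x - x||_1 + ||F Gy - y||_1, averaged over the batch."""
    return np.abs(G(F(x)) - x).mean() + np.abs(F(G(y)) - y).mean()

x = rng.standard_normal((4, 8))  # batch of "aerial photos"
y = rng.standard_normal((4, 8))  # batch of "maps"
print(cycle_consistency_loss(x, y))  # ~0, since G inverts F exactly
```

In the real model this loss cannot be driven to zero semantically, because many aerial photos correspond to the same map; the paper's observation is that the network closes this gap by hiding the missing information in a high-frequency signal.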

2 Hidden Information

We begin with a curious observation, illustrated in Figure 1. We first take an aerial photograph x that was unseen by the network at training time. Since the network was trained to minimize the cyclic consistency loss, one would expect that x ≈ GF x, and indeed the two images turn out to be nearly identical. However, upon closer inspection, it becomes apparent that there are many details present in both the original aerial photograph x and the aerial reconstruction GF x that are not visible in the intermediate map F x. For example, the pattern of black dots on the white roof in x is perfectly reconstructed, even though in the map that area appears solidly gray. How does the network know how to reconstruct the dots so precisely? We observe this phenomenon with nearly every aerial photograph passed into the network, as well as when CycleGAN is trained on datasets other than maps.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

(a) Aerial photograph: x. (b) Generated map: F x. (c) Aerial reconstruction: GF x.

Figure 1: Details in x are reconstructed in GF x, despite not appearing in the intermediate map F x.

We claim that CycleGAN is learning an encoding scheme in which it "hides" information about the aerial photograph x within the generated map F x. This strategy is not as surprising as it first seems, since it is impossible for a CycleGAN model to learn a perfect one-to-one correspondence between aerial photographs and maps when a single map can correspond to a vast number of aerial photos, differing for example in rooftop color or tree location.

It is possible to see directly where CycleGAN may be encoding this hidden information. When we zoom into an apparently solid region of the generated map, we in fact find a surprising amount of variation. We amplify this variation using an image processing technique called adaptive histogram equalization, which enhances contrast in a local neighborhood of each pixel, and present the results in Figure 2. For comparison, we apply the same transformation to a ground-truth training map. We see that there does appear to be an extra, high-frequency signal in the generated map. We investigate the nature of this encoding scheme in the following sections.

(a) Generated map. (b) Training map, for comparison.

Figure 2: Maps with details amplified by adaptive histogram equalization. Information is present in the generated map even in regions that appear empty to the naked eye.
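The amplification in Figure 2 can be approximated in a few lines of NumPy. The snippet below is a deliberately crude, tile-wise histogram equalization of our own (a real experiment would more likely use a library routine such as CLAHE); it shows how a low-amplitude signal in an apparently flat region becomes visible once each neighborhood's contrast is stretched.

```python
import numpy as np

def equalize_tile(tile, n_bins=256):
    # Map each pixel through the tile's empirical CDF.
    hist, _ = np.histogram(tile, bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    idx = np.clip((tile * n_bins).astype(int), 0, n_bins - 1)
    return cdf[idx]

def adaptive_hist_eq(img, tile=32):
    """Crude tile-wise histogram equalization (no bilinear blending)."""
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            out[i:i + tile, j:j + tile] = equalize_tile(img[i:i + tile, j:j + tile])
    return out

# A nearly flat "map" region carrying a tiny high-frequency signal:
rng = np.random.default_rng(1)
img = 0.5 + 0.005 * rng.standard_normal((64, 64))
amp = adaptive_hist_eq(img)
print(img.std(), amp.std())  # the hidden variation is strongly amplified
```

Because each tile's pixel values are replaced by their local ranks, variation that spans only a few gray levels is stretched across the full intensity range, which is exactly why the hidden signal becomes visible.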

3 Sensitivity to Corruption

In this section, we corrupt the generated map with noise and study how the aerial reconstruction changes, in order to characterize the nature of the encoding scheme. Specifically, let us define

V ≡ E_{x∼p(x), z∼p(z)} ||G(F x + z) − GF x||_1,    (1)

(a) V at σ = 0 (spatially independent noise). (b) V at ε = 0.01.

Figure 3: Sensitivity of G to noise as the amplitude ε and spatial correlation σ of the noise varies. In (a), V = ε is plotted for reference.

(a) Original image: x. (b) Edited image: x′. (c) Generated map: F x′. (d) Difference: F x′ − F x.

Figure 4: The information encoding is surprisingly non-local.

where p(x) is the true aerial image distribution and p(z) is a noise distribution. V measures how different the aerial reconstruction becomes when noise is added, and we are interested in how V depends on the noise distribution p(z). In our experiments, we chose p(z) to be Gaussian noise and varied its standard deviation ε and spatial correlation σ. Figure 3 depicts how V behaves as a function of ε and σ, where the expectation is approximated by averaging over 50 aerial photographs.

We found that V attains nearly its maximum value as soon as ε ≥ 3/256 ≈ 0.01, which corresponds to only 3 levels when the image is quantized to 8-bit integers. Thus an imperceptible modification of the map image can lead to major changes in the reconstructed aerial photograph. In fact, we found that simply encoding the generated map F x with lossy JPEG compression destroyed the reconstruction. We also found that V quickly decays to its minimum value as soon as σ ≥ 2, indicating that the information is fairly robust to low-frequency content – including what we perceive as the map itself. This suggests that the majority of the information about the source photograph is stored in a high-frequency, low-amplitude signal within the generated map.

While G is quite sensitive to noise added to a map F x, we show that G is well behaved when a perturbation ∆ created by F is added to F x. Toward this end, we manually create two aerial images x′ and x″ by editing a tree onto a grass field in x in two different locations; we then study the differences in the generated maps, ∆′ = F x′ − F x and ∆″ = F x″ − F x, depicted in Figure 4.¹ We find that the reconstruction G(F x + ∆′ + ∆″) contains both trees added in x′ and x″ and does not contain any unexpected artifacts. This may indicate that the encodings ∆′ and ∆″ are small enough that they operate in the linear regime of G, where

G(F x + ∆′ + ∆″) = GF x + dG ∆′ + dG ∆″ + O(∆²),    (2)

so that the addition of perturbations in the map corresponds to the independent addition of features in the generated aerial image. We confirmed numerically that G(F x + ε∆′) and G(F x + ε∆″) are approximately linear with respect to ε. Finally, we note that if ∆′ is added to an entirely different image, the generated aerial image does not necessarily reconstruct a tree and sometimes contains artifacts.

¹Interestingly, the change ∆′ is not localized around the added tree but extends far beyond this region. However, the tree is still reconstructed when we mask out the nonlocal pixels.

(a) Source map: y0. (b) Crafted map: y∗. (c) Difference: y∗ − y0. (d) Reconstruction: Gy∗.

Figure 5: Generation of a single target aerial photo x∗ from two arbitrary maps y0. Note that (c) is amplified for visibility.
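Equation 1 can be estimated numerically. The sketch below is ours (the paper's F and G are trained networks): it uses a toy decoder that reads a high-frequency signal, and it reproduces the qualitative behavior in Figure 3 – V rises with the noise amplitude ε and falls once the noise becomes spatially correlated, because smooth noise no longer disturbs the high-frequency channel.

```python
import numpy as np

def correlated_noise(shape, eps, sigma, rng):
    """Gaussian noise with std eps; sigma > 0 adds spatial correlation
    via a separable Gaussian blur (NumPy-only)."""
    z = rng.standard_normal(shape)
    if sigma > 0:
        r = int(3 * sigma)
        k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
        k /= k.sum()
        z = np.apply_along_axis(np.convolve, 0, z, k, mode="same")
        z = np.apply_along_axis(np.convolve, 1, z, k, mode="same")
    return eps * z / max(z.std(), 1e-12)

def estimate_V(G, Fx, eps, sigma, n=20, seed=0):
    """Monte-Carlo estimate of Eq. 1: E_z ||G(Fx + z) - G(Fx)||_1."""
    rng = np.random.default_rng(seed)
    base = G(Fx)
    diffs = [np.abs(G(Fx + correlated_noise(Fx.shape, eps, sigma, rng)) - base).mean()
             for _ in range(n)]
    return float(np.mean(diffs))

# Toy decoder standing in for the trained G: it decodes a hidden signal
# from pixel-to-pixel differences, i.e. from high-frequency content.
G = lambda y: 100.0 * (y - np.roll(y, 1, axis=0))

Fx = np.zeros((32, 32))  # an apparently flat "map" region
v_white = estimate_V(G, Fx, eps=0.01, sigma=0)   # spatially independent noise
v_smooth = estimate_V(G, Fx, eps=0.01, sigma=4)  # spatially correlated noise
print(v_white, v_smooth)  # white noise disturbs G far more than smooth noise
```

The same harness, pointed at a trained G, would produce curves like Figure 3; the JPEG observation in the text is the same effect, since JPEG compression discards exactly the high-frequency band the encoding lives in.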

4 Information Hiding as an Adversarial Attack

In this section, we demonstrate that G has the ability to reconstruct any desired aerial photograph x∗ from a specially crafted map y∗. Specifically, we solve the optimization problem

y∗ = arg min_y ||Gy − x∗||,    (3)

by starting gradient descent from an initial, source map y0. This is in the same spirit as an adversarial attack [Szegedy et al., 2013] on G, in which we construct an input y that forces G to produce a desired photograph x∗. We present the results in Figure 5; we find that the addition of a low-amplitude signal to virtually any initial map y0 is sufficient to produce a given aerial image, and the specially crafted map y∗ is visually indistinguishable from the original map y0. The fact that the required perturbation is so small is not too surprising in light of Section 3, where we showed that tiny perturbations in G's input result in large changes in its output.

Recognizing that the cyclic consistency loss

arg min_{F,G} ||GF x − x||

is similar in form to the adversarial attack objective in Equation 3, we may view the CycleGAN training procedure as continually mounting an adversarial attack on G: the generator F is optimized to produce adversarial maps that force G to output a desired image. Since we have demonstrated that such adversarial maps can be found by gradient descent, it is nearly certain that the training procedure also causes F to generate them. As G is itself being optimized, however, G may actually be seen as cooperating in this attack, learning to become increasingly susceptible to it. Indeed, we observe that the magnitude of the difference y∗ − y0 needed to produce a convincing adversarial example via Equation 3 decreases as the CycleGAN model trains, indicating that G cooperates by supporting adversarial maps.
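For a differentiable G, Equation 3 can be solved by plain gradient descent on the input. The toy below is ours (the paper attacks a trained convolutional G, with gradients obtained by backpropagation): it uses an orthogonal linear decoder so the gradient is available in closed form, and it shows the attack driving G's output to an arbitrary target x∗ from an arbitrary starting map y0.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy differentiable decoder standing in for the trained G. An orthogonal
# linear map keeps the least-squares attack well conditioned; the real
# attack backpropagates through the convolutional generator instead.
W, _ = np.linalg.qr(rng.standard_normal((16, 16)))
def G(y):
    return y @ W

x_star = rng.standard_normal(16)  # target "aerial photo" x*
y = rng.standard_normal(16)       # initial source "map" y0

lr = 0.1
for _ in range(200):
    residual = G(y) - x_star
    y -= lr * 2.0 * (residual @ W.T)  # gradient of ||G(y) - x*||^2 w.r.t. y

print(np.abs(G(y) - x_star).max())  # near zero: G now reconstructs x*
```

Unlike this toy, for the trained model the perturbation y∗ − y0 found this way is tiny and visually imperceptible; that property comes from G having learned to respond strongly to low-amplitude, high-frequency signals.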

5 Discussion

CycleGAN is designed to find a correspondence between two probability distributions on domains X and Y. However, if X and Y are of different complexity – if their distributions have differing entropy – it may be impossible to learn a one-to-one transformation between them. We demonstrated that CycleGAN sidesteps this asymmetry by hiding information about the input photograph in a low-amplitude, high-frequency signal added to the output image.

By encoding information in this way, CycleGAN becomes especially vulnerable to adversarial attacks; an attacker can cause one of the learned transformations to produce an image of their choosing by perturbing any chosen source image. The ease with which adversarial examples may be generated for CycleGAN is in contrast to previous work by Tabacof et al. [2016] and Kos et al. [2017], which illustrates that the same attack on VAEs requires noticeable changes to the input image. In serious applications, the cyclic consistency loss should be modified to prevent such attacks.

In future work, we will explore one possible defense: since this particular vulnerability is caused by the cyclic consistency loss and the difference in entropy between the two domains, we will investigate artificially increasing the entropy of one of the domains by adding an additional hidden variable. For instance, if a fourth image channel is added to the map, information need not be hidden in the image but may instead be stored in this fourth channel, reducing the need for the learned transformations to amplify their inputs and making such attacks less likely.

The phenomenon also suggests one possible route to improving the quality of images generated by CycleGAN. Even though the cyclic consistency loss is intended to force the network to encode information about a source image semantically into the generated image, in practice the model learns to "cheat" by encoding information imperceptibly, adversarially. If the network were somehow prevented from hiding information, the transformations might be forced to learn correspondences that are more semantically meaningful.
More broadly, the presence of this phenomenon indicates that caution is necessary when designing loss functions that involve compositions of neural networks: such models may behave in unintuitive ways if one component takes advantage of the ability of another component to support adversarial examples. Common frameworks such as generative adversarial networks [Goodfellow et al., 2014a] and perceptual losses (e.g. Johnson et al. [2016]) employ such compositions; these frameworks should be carefully analyzed to ensure that adversarial examples are not an issue.

Acknowledgments

We thank Jascha Sohl-Dickstein for his insightful comments.

References

G. K. Dziugaite, Z. Ghahramani, and D. M. Roy. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853, 2016.

L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014a.

I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.

J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.

J. Kos, I. Fischer, and D. Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.

H. Shi, J. Dong, W. Wang, Y. Qian, and X. Zhang. SSGAN: Secure steganography based on generative adversarial networks. arXiv preprint arXiv:1707.01613, 2017.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

P. Tabacof, J. Tavares, and E. Valle. Adversarial images for variational autoencoders. arXiv preprint arXiv:1612.00155, 2016.

D. Volkhonskiy, I. Nazarov, B. Borisenko, and E. Burnaev. Steganographic generative adversarial networks. arXiv preprint arXiv:1703.05502, 2017.

J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.

