<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Class 5: Adversarial Machine Learning in Non-Image Domains · secML</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="HandheldFriendly" content="True">
<meta name="MobileOptimized" content="320">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="" />
<meta name="keywords" content="">
<meta property="og:title" content="Class 5: Adversarial Machine Learning in Non-Image Domains · secML ">
<meta property="og:site_name" content="secML"/>
<meta property="og:url" content="https://secml.github.io/class5/" />
<meta property="og:locale" content="en-us">
<meta property="og:type" content="article" />
<meta property="og:description" content=""/>
<meta property="og:article:published_time" content="2018-02-23T00:00:00Z" />
<meta property="og:article:modified_time" content="2018-02-23T00:00:00Z" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@" />
<meta name="twitter:creator" content="@" />
<meta name="twitter:title" content="Class 5: Adversarial Machine Learning in Non-Image Domains" />
<meta name="twitter:description" content="" />
<meta name="twitter:url" content="https://secml.github.io/class5/" />
<meta name="twitter:domain" content="https://secml.github.io">
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "Article",
"headline": "Class 5: Adversarial Machine Learning in Non-Image Domains",
"author": {
"@type": "Person",
"name": "http://profiles.google.com/+?rel=author"
},
"datePublished": "2018-02-23",
"description": "",
"wordCount": 2079
}
</script>
<link rel="canonical" href="https://secml.github.io/class5/" />
<link rel="apple-touch-icon-precomposed" sizes="144x144" href="https://secml.github.io/touch-icon-144-precomposed.png">
<link href="https://secml.github.io/favicon.png" rel="icon">
<meta name="generator" content="Hugo 0.17" />
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<link href='https://fonts.googleapis.com/css?family=Merriweather:300%7CRaleway%7COpen+Sans' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="https://secml.github.io/css/font-awesome.min.css">
<link rel="stylesheet" href="https://secml.github.io/css/style.css">
<script src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML'></script>
</head>
<body>
<main id="main-wrapper" class="container main_wrapper has-sidebar">
<header id="main-header" class="container main_header">
<div class="container brand">
<div class="container topline">
<a href="/">
Security and Privacy of Machine Learning
</a>
</div>
</div>
<nav class="container nav primary no-print">
<a href="https://secml.github.io/syllabus">Syllabus</a>
<a href="https://secml.github.io/schedule">Schedule</a>
<a href="https://secml.github.io/teams">Teams</a>
<a href="https://secml.github.io/topics">Topics</a>
<a href="https://secml.github.io/post" title="Show list of posts">Posts</a>
</nav>
<div class="container nav secondary no-print">
</div>
</header>
<article id="main-content" class="container main_content single">
<header class="container hat">
<h1><center>Class 5: Adversarial Machine Learning in Non-Image Domains
</center></h1>
<div class="metas">
<time datetime="2018-02-23">23 Feb, 2018</time>
· by Team Nematode
<br>
</div>
</header>
<div class="container content">
<h2 id="beyond-images">Beyond Images</h2>
<p>While the bulk of adversarial machine learning work has focused on image classification, ML is being used for a variety of real-world tasks, and attacks (and defenses) need to be tailored to the domain of the ML process.
Among the most significant uses of ML are natural language processing and voice recognition. In these fields, attacks look very different from attacks on images: individual pixels of an image can be changed by large amounts without making the image unrecognizable, whereas in voice recognition the audio must remain smooth, although many transformations can still be applied that are not noticeable to humans.
These attacks are becoming more significant as always-on listening devices like Amazon Alexa become more common among the general public. A broadcast television attack could be used to access the personal accounts and devices of thousands of people simultaneously, with many of the victims never noticing.</p>
<h2 id="adversarial-audio">Adversarial Audio</h2>
<blockquote>
<p>Nicholas Carlini and David Wagner. <em>Audio Adversarial Examples: Targeted Attacks on Speech-to-Text</em>. University of California, Berkeley. 5 January 2018. [<a href="https://arxiv.org/pdf/1801.01944.pdf">PDF</a>]</p>
</blockquote>
<p>If classic science fiction authors, painting grandiose visions of artificial intelligence in the future, could read some of the tech headlines of today, they might be surprised by how their predictions panned out:</p>
<p><img src="/images/class5/headlines.png" alt="" title="Google Home and Alexa in the news" />
<div class="caption">
Source: <a href="https://google.com"><em>Slides by Team Bus</em></a>
</div></p>
<p>These news articles recount recent incidents in which smart home voice-controlled devices have been activated by voices in television advertisements and shows, rather than from a physically-present human voice. While these untargeted, accidental (we assume) occurrences perhaps make for amusing news, they reflect a significant potential vulnerability in speech-to-text systems against targeted, adversarial attacks.</p>
<p>In their 2018 paper, authors <a href="https://arxiv.org/pdf/1801.01944.pdf">Carlini and Wagner</a> develop and demonstrate a white-box attack against automated speech recognition systems: “given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase of choice”. That is, a virtually unaltered sample, which might sound, at worst, noisy or mildly distorted to human ears, will be translated as an entirely different set of characters by the system.</p>
<p><img src="/images/class5/waveform_illustration.png" alt="" title="Illustration of the attack: adding small perturbations causes an audio waveform to transcribe to any desired target phrase." />
<div class="caption">
Source: <a href="https://arxiv.org/pdf/1801.01944.pdf"><em>Audio Adversarial Examples</em></a> [1]
</div></p>
<h4 id="audio-distortion">Audio distortion</h4>
<p>The goal of this attack is twofold: (a) to add the least possible amount of distortion to the original audio, and (b) to cause the distorted audio to transcribe to an arbitrary target phrase. The attack thus becomes an optimization problem.</p>
<p>To measure distortion, let \(dB(v)\) denote the loudness of a waveform \(v\) measured in decibels. The relative loudness of a perturbation with respect to the original audio is</p>
<p>$$dB_x(\delta) := dB(\delta) - dB(x)$$</p>
<p>where \(x\) is the original waveform and \(\delta\) is the added perturbation. Then, let</p>
<p>$$C(v) = \mathop{\arg\,\max}\limits_p Pr[p | f(v)]$$</p>
<p>be the most likely transcribed phrase given some sample \(v\).</p>
<p>Our optimization problem thus becomes:</p>
<p>$$\text{minimize } dB_x(\delta)$$
$$\text{such that } C(x+\delta)=t$$
$$\text{with } x+\delta \in [-M,M]$$</p>
<p>where \(t\) is the target transcription and \(M\) is the maximum representable sample value, so that the distorted audio stays in range (in the authors’ 16-bit setting, \(M = 2^{15}\)).</p>
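<p>As a concrete illustration of the distortion measure \(dB_x(\delta)\) used above (our own sketch, not code from the paper):</p>
<pre><code class="language-python"># Minimal sketch of the distortion metric defined above. dB() here measures
# peak amplitude in decibels; the absolute reference cancels in dB_x(delta).
import numpy as np

def dB(v):
    return 20 * np.log10(np.max(np.abs(v)) + 1e-12)

def dB_x(delta, x):
    # relative loudness of the perturbation with respect to the original audio
    return dB(delta) - dB(x)
</code></pre>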
<p>However, due to the non-linearity of \(C(\cdot)\), gradient descent on this constrained problem fails, so instead we minimize the relaxed objective</p>
<p>$$dB_x(\delta) + c \cdot \ell(x+\delta,t)$$
$$\text{with } \ell(x',t) = \text{CTC-Loss}(x',t)$$</p>
<p>where \(c\) is a constant that trades off distortion against the loss term, and \(\text{CTC-Loss}\) (Connectionist Temporal Classification loss) is the negative log-likelihood of the target phrase \(t\) given the network’s output on \(x'\).</p>
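<p>A minimal sketch of this relaxed optimization is shown below (our own illustration, not the authors’ implementation): a tiny linear layer stands in for a real speech-to-text network, and a max-amplitude penalty stands in for \(dB_x(\delta)\).</p>
<pre><code class="language-python"># Sketch only: gradient descent on a perturbation delta so that the CTC loss
# toward a target transcription decreases while delta stays small.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_samples, frame, n_chars = 16000, 320, 28            # 1 s of 16 kHz audio, toy alphabet
x = (torch.rand(n_samples) * 2 - 1) * 0.1              # original waveform, scaled to [-1, 1]
target = torch.tensor([[8, 5, 12, 12, 15]])            # hypothetical target phrase indices
stand_in = torch.nn.Linear(frame, n_chars)             # stand-in for the acoustic model f

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
ctc = torch.nn.CTCLoss(blank=0)

for step in range(100):
    adv = torch.clamp(x + delta, -1.0, 1.0)            # keep x + delta in the valid range [-M, M]
    frames = adv.view(-1, frame)                       # (T, frame)
    log_probs = F.log_softmax(stand_in(frames), dim=-1).unsqueeze(1)   # (T, 1, n_chars)
    loss_ctc = ctc(log_probs, target,
                   torch.tensor([log_probs.shape[0]]),
                   torch.tensor([target.shape[1]]))
    loss = loss_ctc + 0.05 * delta.abs().max()         # CTC loss plus an amplitude penalty on delta
    opt.zero_grad(); loss.backward(); opt.step()
</code></pre>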
<p>After solving this optimization problem, the authors constructed targeted examples on the first 100 test instances of Mozilla’s Common Voice dataset, targeting 10 different incorrect transcriptions for each – with 100% success over the source-target pairs, and a mean perturbation magnitude of -31 dB.</p>
<p>By further changing the loss function to</p>
<p>$$\ell(x, \pi) = \sum_i \max \left( f(x)^i_{\pi^i}-\max_{t'\neq \pi^i} f(x)^i_{t'},\, 0 \right) $$</p>
<p>which stops spending distortion on items that are already classified as the target rather than making them ever more strongly classified, the mean distortion was reduced further, from -31 dB to -38 dB – still a barely noticeable to unnoticeable change.</p>
<h4 id="attack-sources-and-targets">Attack Sources and Targets</h4>
<p>The attack worked consistently regardless of the source audio, meaning that an attack starting from white noise or non-speech is as easy to craft as one starting from speech. Targeting silence turned out to be even easier than targeting speech.</p>
<h4 id="robustness">Robustness</h4>
<p>Random pointwise noise at less than -30 dB is sufficient to corrupt an attack, although stronger attacks can be crafted using larger perturbations. The attacks did remain MP3-resistant, which speaks to the low loss of MP3 compression as well as the fidelity of the attack model.</p>
<p>These attacks become easier with longer sources, as the pacing of the target speech can be varied more, giving a larger attack space in which to work. In the ideal case, an attacker would work with a prerecorded broadcast, which gives a large continuous space to attack.
That said, over-the-air attacks were not viable due to ambient noise and variation across speakers and rooms. Hidden voice commands did, however, show some promise for broadcast attacks.</p>
<p>Transferability was shown to hold, as it appears to be fundamental to ML systems. Defense techniques at the ML layer have not yet been developed, although random noise added after the attack is a viable defense.</p>
<h2 id="natural-language-processing">Natural Language Processing</h2>
<blockquote>
<p>Suranjana Samanta and Sameep Mehta. <a href="https://arxiv.org/pdf/1707.02812.pdf">Towards Crafting Text Adversarial Samples</a>. 2017.</p>
</blockquote>
<p>Adversarial samples are strategically modified samples crafted to fool the classifier at hand. Although most prior work has focused on synthesizing adversarial samples in the image domain, NLP-based text classifiers can also be the target of such exploits. This paper introduces a method of crafting adversarial text samples by modifying the original samples. To be precise, the authors propose to modify the original text by deleting or replacing important words, or by introducing new words into the text sample. While crafting adversarial samples, the paper focuses on generating meaningful sentences that can pass as legitimate from a language viewpoint.</p>
<h3 id="approach">Approach</h3>
<p>Initially, the authors calculate the contribution of each word towards determining the class label. A word contributes highly if its removal from the text changes the class probability by a large amount. In the proposed approach, the sample text is modified one word at a time, in the order of this class-contribution ranking. For the modification step, a candidate pool is created for each word in the sample text, consisting of synonyms and typos of the word as well as genre- or sub-category-specific keywords. The motivation for genre-specific keywords is that certain words may contribute to positive sentiment for one genre but emphasize negative sentiment for another. For example, consider the sentence “The movie was hilarious”. This indicates a positive sentiment for a comedy movie, but the same sentence denotes a negative sentiment for a horror movie; thus the word ‘hilarious’ contributes to the sentiment of the review depending on the genre of the movie. Finally, replacement, addition, or removal of words is performed on the text sample at each iteration until the modified sample flips its class label.</p>
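<p>A minimal sketch of this leave-one-out contribution ranking (our own illustration; <code>predict_proba</code> is a hypothetical stand-in for any text classifier that returns class probabilities, not a function from the paper):</p>
<pre><code class="language-python">def word_contributions(text, true_class, predict_proba):
    # Rank words by how much removing each one lowers the probability of the true class.
    words = text.split()
    base = predict_proba(text)[true_class]
    scores = []
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])   # text with the i-th word removed
        scores.append((w, base - predict_proba(reduced)[true_class]))
    return sorted(scores, key=lambda s: s[1], reverse=True)
</code></pre>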
<h3 id="experiments">Experiments</h3>
<p>The paper reports experimental results on two datasets: the IMDB movie review dataset for sentiment analysis and a Twitter dataset for gender classification. The authors compare the effectiveness of their method with the existing method TextFool by measuring the accuracy of the model
under different configurations. In both cases, the target classifier was a Convolutional Neural Network (CNN).</p>
<p>The figure below (taken from the paper) shows the results for the IMDB dataset. From the figure, it is clear that the proposed method is capable of synthesizing semantically correct adversarial text samples from the original text. In addition, the inclusion of genre-specific keywords appears to increase the quality of sample crafting: the drop in accuracy of the classifier (before re-training) between the original text samples and the adversarially crafted samples is larger when genre-specific keywords are used.
<p align="center">
<img src="/images/result_imdb.PNG" width="500" >
<br> <b>Figure:</b> Performance results on IMDB movie review dataset.
</p></p>
<p>A significant factor in evaluating the effectiveness of adversarial samples is the semantic similarity between the original samples and their tainted counterparts. A low similarity score means the semantic meaning of the original and modified samples is quite different, which is undesirable from a language viewpoint. The authors found the average semantic similarity between the original text samples and their adversarial counterparts (for the test set only) to be 0.9164 and 0.9732 with and without the genre-specific keywords, respectively. Although the semantic similarity between the original and perturbed samples decreases slightly when genre-specific keywords are included in the candidate pool, the number of valid adversarial samples generated increases, as can be seen from the percentage of perturbed samples for genre-specific keywords in the figure above.</p>
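<p>One simple way to approximate such a similarity score (a sketch only; not necessarily the metric used in the paper) is cosine similarity between vector representations of the original and perturbed texts:</p>
<pre><code class="language-python"># Crude lexical proxy for semantic similarity using TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity(original, perturbed):
    vecs = TfidfVectorizer().fit_transform([original, perturbed])
    return cosine_similarity(vecs[0], vecs[1])[0, 0]

print(similarity("The movie was hilarious", "The movie was uproarious"))
</code></pre>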
<p>Another key component in the evaluation of adversarial samples is the number of changes needed to obtain the adversarial sample, which should ideally be low. The following figure shows the number of changes required to create successful adversarial samples with and without the genre-specific keywords. From the figure, it is evident that, for the same number of modifications, more tainted samples are produced when genre-specific keywords are used than when they are not.
<p align="center">
<img src="/images/result_modifications.PNG" width="500" >
<br> <b>Figure:</b> Plot showing the number of adversarial samples produced against the number of changes incurred
</p></p>
<p>For the Twitter dataset, the proposed method shows performance similar to that on the IMDB dataset.
<p align="center">
<img src="/images/result_twitter.PNG" width="500" >
<br> <b>Figure:</b> Performance results on Twitter gender classification dataset.
</p></p>
<h3 id="example">Example</h3>
<p align="center">
<img src="/images/example_nlp.PNG" width="500" >
<br> <b>Figure:</b> Examples of adversarial samples crafted from the Twitter and IMDB datasets using (i) TextFool and
(ii) the proposed method.
</p>
<h4 id="face-recognition">Face Recognition</h4>
<blockquote>
<p>Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter. <a href="https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf"><em>Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition</em></a>. ACM CCS 2016.</p>
</blockquote>
<p>Face recognition and face detection algorithms are extensively used in surveillance and access control applications. This paper focuses on fooling state-of-the-art machine learning algorithms that are practically deployed for these tasks. More concretely, the paper carries out two types of attacks: dodging and impersonation. In dodging, the attacker tries to conceal his/her identity, whereas in impersonation, the attacker tries to trick the algorithm into recognizing him/her as a different target individual. The authors realize these attacks by printing wearable eyeglass frames which, when worn, allow the attacker to successfully launch the attacks. The glasses used in the attack are shown below.</p>
<p align="center">
<img src="/images/class5/glasses.png" width="500" >
<br> <b>Figure:</b> Glass frames used for the attacks
</p>
<p><a href="https://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf">Parikh et al.</a> proposed the state-of-art 39 layer deep neural network trained on 2,622 celebrities which achieved an accuracy of 98.95% for the task of face detection. The authors of this paper use this model to launch their attacks.</p>
<p>The objective function of the deep neural network’s softmax layer is given below:
<p align="center">
<img src="/images/class5/objective.png" width="500" >
<br>
</p></p>
<p>For impersonation, the objective is to add the minimum amount of noise \(r\) to the input image \(x\) to convert the class label to the target label \(c_t\), as given below:
<p align="center">
<img src="/images/class5/impersonation_obj.png" width="450" >
<br>
</p></p>
<p>For dodging, the objective is to add the minimum amount of noise \(r\) to the input image \(x\) to deviate from the correct class label \(c_x\).
<p align="center">
<img src="/images/class5/dodging_obj.png" width="500" >
<br>
</p></p>
<p>The computed noise is then carefully added to the 3D-printed glasses.</p>
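<p>A minimal sketch of the impersonation objective above (our own illustration, assuming a differentiable face classifier <code>model</code> and a mask marking the eyeglass-frame pixels; not the authors’ implementation):</p>
<pre><code class="language-python">import torch
import torch.nn.functional as F

def impersonate(model, image, glasses_mask, target_class, steps=100, lr=0.05):
    # Only pixels inside the frame region are perturbed; gradient descent pushes
    # the model prediction toward the target identity.
    r = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([r], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(image + r * glasses_mask, 0.0, 1.0)
        logits = model(adv.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.clamp(image + r.detach() * glasses_mask, 0.0, 1.0)
</code></pre>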
<h4 id="experiments-1">Experiments</h4>
<p>The paper demonstrates successful dodging and impersonation attacks. The figure below shows an example of impersonation, where a female celebrity (left) is classified as a male celebrity (right) by the algorithm when the glasses frame is worn (center).
<p align="center">
<img src="/images/class5/example1.png" width="500" >
<br> <b>Figure:</b> Successful impersonation using the glasses
</p></p>
<p>The figure below shows an example of dodging the face detection algorithm with minor changes to the image that do not affect normal human judgment. The left image is the original; the center and right images are the perturbed images in which the algorithm does not detect the faces.
<p align="center">
<img src="/images/class5/example2.png" width="500" >
<br> <b>Figure:</b> Dodging the face detection
</p></p>
<p>Further results show a 100% dodging success rate and a very high impersonation success rate.</p>
<h4 id="summary">Summary</h4>
<p>While the authors perform white-box attacks on the state-of-the-art face detection and face recognition algorithms, they also discuss successfully carrying out black-box attacks.</p>
<p>— Team Nematode: <br />
Bargav Jayaraman, Guy “Jack” Verrier, Joshua Holtzman, Max Naylor, Nan Yang, Tanmoy Sen</p>
<h2 id="references">References</h2>
<p><a href="https://arxiv.org/pdf/1801.01944.pdf">[1]</a> Nicholas Carlini and David Wagner, “Audio Adversarial Examples: Targeted Attacks on Speech-to-Text.” January 2018.</p>
<p><a href="https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf">[2]</a> Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter, “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition.” October 2016.</p>
</div>
<footer class="container">
<div class="container navigation no-print">
<h2>Navigation</h2>
<a class="prev" href="https://secml.github.io/class4/" title="Class 4: Differential Privacy In Action">
Previous
</a>
<a class="next" href="https://secml.github.io/class6/" title="Class 6: Measuring Robustness of ML Models">
Next
</a>
</div>
</footer>
</article>
<footer id="main-footer" class="container main_footer">
<div class="container nav foot no-print">
<a class="toplink" href="#">back to top</a>
</div>
<div class="container credits">
<div class="container footline">
<p align="center">cs6501: secML | University of Virginia, Spring 2018 | <a href="https://www.cs.virginia.edu/evans">David Evans</a>
</div>
</div>
</footer>
</main>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-3775212-1', 'auto');
ga('send', 'pageview');
</script>
</body>
</html>