Aliasing black box adversarial attack with joint self-attention distribution and confidence probability

Date

2023

Publisher

Pergamon-Elsevier Science Ltd

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial attacks, in which a small perturbation to input samples can cause misclassification. However, selecting the important words for textual attack models remains a major challenge. Therefore, this paper proposes a novel score-based attack model that addresses the important-word selection problem for textual attacks. To this end, the model generates semantically consistent adversarial examples to mislead a text classification model. It integrates the self-attention mechanism and confidence probabilities to select the important words. Moreover, an alternative model, similar to that used in transfer attacks, is introduced to reflect the degree of correlation among words within the texts. Finally, adversarial training experiments demonstrate the superiority of the proposed model.
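The abstract describes ranking candidate words by combining two signals: the self-attention mass a model assigns to each word and the drop in classification confidence when that word is probed. The exact scoring rule is not given in this record, so the following is only a minimal sketch of that idea under assumed details; the function names, the leave-one-out confidence drops, and the mixing weight `alpha` are all hypothetical illustrations, not the authors' method.

```python
import numpy as np

def word_importance_scores(attention_weights, confidence_drops, alpha=0.5):
    """Combine per-word self-attention mass with confidence-probability drops
    into a single importance score.

    attention_weights: attention mass each word receives (from the model).
    confidence_drops: decrease in true-class confidence when each word is
        masked (leave-one-out probing) -- an assumed probing scheme.
    alpha: hypothetical mixing weight between the two signals.
    """
    attn = np.asarray(attention_weights, dtype=float)
    drop = np.asarray(confidence_drops, dtype=float)
    attn = attn / attn.sum()              # normalize to a distribution
    drop = drop / max(drop.sum(), 1e-12)  # guard against all-zero drops
    return alpha * attn + (1.0 - alpha) * drop

def select_important_words(words, scores, k=2):
    """Return the top-k words ranked by combined importance score."""
    order = np.argsort(scores)[::-1]
    return [words[i] for i in order[:k]]

# Toy example with made-up attention and confidence values.
words = ["the", "movie", "was", "absolutely", "terrible"]
attn = [0.05, 0.20, 0.05, 0.25, 0.45]
drops = [0.01, 0.10, 0.02, 0.15, 0.60]
scores = word_importance_scores(attn, drops)
print(select_important_words(words, scores, k=2))  # ['terrible', 'absolutely']
```

The selected words would then be the targets for semantic-preserving substitutions (e.g., synonym replacement) in a score-based attack; that substitution step is not shown here.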

Description

This work was supported by the Chongqing Research Program of Basic Research and Frontier Technology (Grant No. cstc2021jcyj-msxmX0530 and Grant No. cstc2020jcyj-msxmX0804), the Technology Innovation and Application Development Projects of Chongqing (Grant No. cstc2021jscx-gksbX0032, cstc2021jscx-gksbX0029), and the Key R&D Plan of Hainan Province (Grant No. ZDYF2021GXJS006).

Keywords

Adversarial Attack, Self-Attention Distribution, Text Classification, Efficient

Source

Expert Systems with Applications

WoS Quartile

Q1

Scopus Quartile

Q1

Volume

214

Citation

Liu, J., Jin, H., Xu, G., Lin, M., Wu, T., Nour, M., ... & Polat, K. (2023). Aliasing black box adversarial attack with joint self-attention distribution and confidence probability. Expert Systems with Applications, 214, 119110.