Aliasing black box adversarial attack with joint self-attention distribution and confidence probability

dc.contributor.author: Liu, Jun
dc.contributor.author: Jin, Haoyu
dc.contributor.author: Xu, Guangxia
dc.contributor.author: Lin, Mingwei
dc.contributor.author: Wu, Tao
dc.contributor.author: Polat, Kemal
dc.date.accessioned: 2023-10-25T07:20:42Z
dc.date.available: 2023-10-25T07:20:42Z
dc.date.issued: 2023 (en_US)
dc.department: BAİBÜ, Faculty of Engineering, Department of Electrical and Electronics Engineering (en_US)
dc.description: This work was supported by the Chongqing Research Program of Basic Research and Frontier Technology (Grant No. cstc2021jcyj-msxmX0530 and Grant No. cstc2020jcyj-msxmX0804), the Technology Innovation and Application Development Projects of Chongqing (Grant No. cstc2021jscx-gksbX0032, cstc2021jscx-gksbX0029), and the Key R&D Plan of Hainan Province (Grant No. ZDYF2021GXJS006). (en_US)
dc.description.abstract: Deep neural networks (DNNs) are vulnerable to adversarial attacks, in which a small perturbation to input samples can cause misclassification. However, selecting the important words for textual attack models is a major challenge. Therefore, this paper proposes an innovative score-based attack model that solves the important-word selection problem for textual attack models. To this end, the model generates semantically adversarial examples to mislead a text classification model. The model then integrates the self-attention mechanism and confidence probabilities to select the important words. Moreover, an alternative model, similar to a transfer attack, is introduced to reflect the degree of correlation among words within the texts. Finally, experimental results with adversarial training demonstrate the superiority of the proposed model. (en_US)
dc.description.sponsorship: Chongqing Research Program of Basic Research and Frontier Technology; Technology Innovation and Application Development Projects of Chongqing; Key R&D Plan of Hainan Province; [cstc2021jcyj-msxmX0530]; [cstc2020jcyj-msxmX0804]; [cstc2021jscx-gksbX0032]; [cstc2021jscx-gksbX0029]; [ZDYF2021GXJS006] (en_US)
dc.identifier.citation: Liu, J., Jin, H., Xu, G., Lin, M., Wu, T., Nour, M., ... & Polat, K. (2023). Aliasing black box adversarial attack with joint self-attention distribution and confidence probability. Expert Systems with Applications, 214, 119110. (en_US)
dc.identifier.doi: 10.1016/j.eswa.2022.119110
dc.identifier.endpage: 12 (en_US)
dc.identifier.issn: 0957-4174
dc.identifier.issn: 1873-6793
dc.identifier.scopus: 2-s2.0-85141261103 (en_US)
dc.identifier.scopusquality: Q1 (en_US)
dc.identifier.startpage: 1 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1016/j.eswa.2022.119110
dc.identifier.uri: https://hdl.handle.net/20.500.12491/11791
dc.identifier.volume: 214 (en_US)
dc.identifier.wos: WOS:000916091700002 (en_US)
dc.identifier.wosquality: Q1 (en_US)
dc.indekslendigikaynak: Web of Science (en_US)
dc.indekslendigikaynak: Scopus (en_US)
dc.institutionauthor: Polat, Kemal
dc.language.iso: en (en_US)
dc.publisher: Pergamon-Elsevier Science Ltd (en_US)
dc.relation.ispartof: Expert Systems with Applications (en_US)
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member (en_US)
dc.rights: info:eu-repo/semantics/closedAccess (en_US)
dc.subject: Adversarial Attack (en_US)
dc.subject: Self-Attention Distribution (en_US)
dc.subject: Text Classification (en_US)
dc.subject: Efficient (en_US)
dc.title: Aliasing black box adversarial attack with joint self-attention distribution and confidence probability (en_US)
dc.type: Article (en_US)
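
The abstract above describes ranking candidate words by jointly using a self-attention distribution and the victim classifier's confidence probabilities. The following Python code is a minimal sketch of that general idea only, not the paper's implementation: the function name rank_important_words, the masking-based confidence drop, and the weighting parameter alpha are all illustrative assumptions.

# Illustrative sketch (assumed, not from the paper): rank words in a text by a joint
# score combining a per-token self-attention distribution with the confidence drop
# the victim classifier shows when that token is masked out.

from typing import Callable, List, Sequence, Tuple


def rank_important_words(
    tokens: Sequence[str],
    attention: Sequence[float],              # per-token self-attention mass (assumed given)
    classify: Callable[[List[str]], float],  # victim model: returns P(true label | tokens)
    mask_token: str = "[MASK]",
    alpha: float = 0.5,                      # illustrative weight between the two signals
) -> List[Tuple[str, float]]:
    """Return tokens sorted by descending joint importance score."""
    base_conf = classify(list(tokens))       # confidence on the unperturbed text
    scored = []
    for i, tok in enumerate(tokens):
        masked = list(tokens)
        masked[i] = mask_token               # occlude one word at a time
        conf_drop = max(0.0, base_conf - classify(masked))
        score = alpha * attention[i] + (1.0 - alpha) * conf_drop
        scored.append((tok, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    # Toy victim "model": confidence falls when the word "excellent" is masked.
    toy = lambda toks: 0.9 if "excellent" in toks else 0.4
    tokens = ["the", "movie", "was", "excellent", "overall"]
    attention = [0.05, 0.15, 0.05, 0.55, 0.20]  # made-up attention distribution
    print(rank_important_words(tokens, attention, toy))

In such a scheme, the top-ranked words would be the ones a black-box attack perturbs first; how the actual model combines the two signals is specified in the cited article.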

Files

Original bundle
Name: jun-liu.pdf
Size: 1.35 MB
Format: Adobe Portable Document Format
Description: Full Text
License bundle
Name: license.txt
Size: 1.44 KB
Description: Item-specific license agreed upon to submission