DEFENDING STRATEGIES AGAINST ADVERSARIAL ATTACKS IN RETRIEVAL SYSTEMS
Volume 3 (1), June 2020, Pages 46-53
Suleyman Suleymanzade
Institute of Information Technology, Azerbaijan National Academy of Sciences, Baku, Azerbaijan
Abstract
Over time, retrieval systems have become more complex in their architecture and operating principles. A system that gathers text and visual data from the internet must classify the data and store it as a set of metadata. The modern AI classifiers used in retrieval systems can be tricked by skilled intruders who mount adversarial attacks against them. The goal of this paper is to review different attack and defense strategies, describe state-of-the-art methods on both sides, and show how important the development of HPC is to protecting such systems.
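To make the kind of attack discussed here concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) named in the keywords, assuming a PyTorch image classifier with inputs scaled to [0, 1]; the model, loss_fn, and epsilon names are illustrative placeholders, not taken from the paper.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss).

    Assumes `model` maps images to logits and `x` is scaled to [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

PGD, the other attack named in the keywords, can be viewed as iterating this sign step several times with a projection back into an epsilon-ball around the original input.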
Keywords:
adversarial attacks, retrieval systems, FGSM, PGD, HPC
DOI: https://doi.org/10.32010/26166127.2020.3.1.46.53