Preventing Data Poisoning Attacks By Using Generative Models      
Authors (3)
Merve Aladağ
İstanbul Şehir Üniversitesi, Turkey
Ferhat Özgür Çatak
Norges Teknisk-Naturvitenskapelige Universitet, Norway
Prof. Dr. Ensar GÜL
Maltepe Üniversitesi, Turkey
Abstract
Machine learning methods have become increasingly popular, and their areas of application have grown accordingly. Their use is also expected to grow in cyber security components such as firewalls and antivirus software. However, deploying such methods introduces new risks: attackers develop techniques to manipulate not only cyber security components but also systems such as image recognition. Securing machine learning models has therefore become critical. In this paper, we demonstrate a data poisoning attack against a machine learning classification model and propose a defense algorithm that makes such models more robust against data poisoning attacks. We conduct data poisoning attacks on MNIST, a widely used handwritten character recognition data set, and then, using the poisoned MNIST data, we build more reliable classification models with the help of a generative model, an autoencoder.
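The abstract describes poisoning an MNIST training set and then using an autoencoder to make the downstream classifier more reliable. Below is a minimal, illustrative sketch of that general idea, not the authors' exact pipeline: it poisons a fraction of the training data, trains a small autoencoder, discards the samples the autoencoder reconstructs worst, and fits a support vector machine on what remains. The use of scikit-learn's digits set as a lightweight MNIST stand-in, an MLPRegressor as the autoencoder, the noise-plus-label-flip poisoning scheme, and the 90th-percentile filtering cutoff are all assumptions made for illustration.

```python
# Illustrative sketch (assumed pipeline, not the paper's exact method):
# 1) poison a fraction of the training set with perturbed, mislabeled points,
# 2) train a small autoencoder (an MLPRegressor fit to reproduce its input),
# 3) drop the training samples with the highest reconstruction error,
# 4) train an SVM on the filtered data and compare against the poisoned baseline.
import numpy as np
from sklearn.datasets import load_digits          # light stand-in for MNIST
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor   # used here as a dense autoencoder
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Load and split the data (8x8 digits instead of 28x28 MNIST).
X, y = load_digits(return_X_y=True)
X = X / 16.0                                      # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: simple poisoning: perturb and mislabel 10% of the training points.
n_poison = int(0.10 * len(X_tr))
idx = rng.choice(len(X_tr), size=n_poison, replace=False)
X_tr[idx] = np.clip(X_tr[idx] + rng.normal(0, 0.5, X_tr[idx].shape), 0, 1)
y_tr[idx] = (y_tr[idx] + rng.integers(1, 10, n_poison)) % 10   # flipped labels

# Step 2: train a small autoencoder on the (poisoned) training images.
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X_tr, X_tr)

# Step 3: filter out the samples the autoencoder reconstructs worst
# (assumed 90th-percentile cutoff on reconstruction error).
recon_err = np.mean((ae.predict(X_tr) - X_tr) ** 2, axis=1)
keep = recon_err < np.percentile(recon_err, 90)

# Step 4: compare an SVM trained on the poisoned data with one trained on the filtered data.
svm_poisoned = SVC(kernel="rbf").fit(X_tr, y_tr)
svm_filtered = SVC(kernel="rbf").fit(X_tr[keep], y_tr[keep])
print("poisoned :", accuracy_score(y_te, svm_poisoned.predict(X_te)))
print("filtered :", accuracy_score(y_te, svm_filtered.predict(X_te)))
```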
Keywords
data poisoning | machine learning | optimization | support vector machine
Paper Type Conference Paper
Paper Subtype Full-Text Paper (National Congress/Symposium)
Paper Category Peer-Reviewed National Congress/Symposium in the Field
Paper Language Turkish
Conference Name 1st International Informatics and Software Engineering Conference
Conference Dates 06-11-2019 / 07-11-2019
Country of Publication Turkey
City of Publication
UN Sustainable Development Goals
Citation Counts
SCOPUS 16
Google Scholar 29
ResearchGate 30