SoSe 21: Implementation Project: Privacy-Preserving Machine Learning
Marian Margraf
Comments
In recent years, the topic of Machine Learning (ML) privacy has gained interest. Research has shown that the process of turning training data into a model can be (partly) inverted. Such methods include “model inversion” (attacks in which representations of the training data are reverse-engineered from model parameters), “attribute inference” (attacks that attempt to reconstruct specific attributes of the training data), and “membership inference” (attacks that try to determine whether a particular data point was included in the model’s training data). In this software project, we will design and implement a tool that integrates different types of such attacks against ML models, so that they can be executed on a given model to evaluate its overall privacy level.
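To illustrate the simplest of these attack classes, the following is a minimal, self-contained sketch of a confidence-threshold membership inference attack (in the spirit of [2]): an overfit model tends to assign higher confidence to points it was trained on, so an attacker guesses “member” whenever the model’s confidence on a point exceeds a threshold. All data, model, and function names here are illustrative assumptions, not part of the project.

```python
import numpy as np

# Illustrative toy setup (assumption, not project code): a small logistic
# regression deliberately overfit to random labels, then attacked.
rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Train logistic regression by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y=1)
        g = p - y                               # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def confidence(w, b, X, y):
    """Model confidence assigned to the TRUE label of each point."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.where(y == 1, p, 1.0 - p)

# Members (training set) vs. non-members from the same distribution.
# Few points, many features, random labels -> the model memorizes.
X_mem = rng.normal(size=(20, 50)); y_mem = rng.integers(0, 2, 20).astype(float)
X_non = rng.normal(size=(20, 50)); y_non = rng.integers(0, 2, 20).astype(float)

w, b = train_logreg(X_mem, y_mem)

# Attack: guess "member" if true-label confidence exceeds a threshold.
threshold = 0.5
guess_mem = confidence(w, b, X_mem, y_mem) > threshold
guess_non = confidence(w, b, X_non, y_non) > threshold
attack_acc = (guess_mem.sum() + (~guess_non).sum()) / (len(y_mem) + len(y_non))
print(f"membership-inference attack accuracy: {attack_acc:.2f}")
```

An accuracy clearly above 0.5 indicates that the model leaks membership information; the shadow-model approach of [2] generalizes this idea by training the attack decision rule instead of fixing a threshold.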
Suggested reading
[1] Hunt, Tyler, Congzheng Song, Reza Shokri, Vitaly Shmatikov, and Emmett Witchel. "Chiron: Privacy-preserving machine learning as a service." arXiv preprint arXiv:1803.05961 (2018).
[2] Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. "Membership inference attacks against machine learning models." In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2017.
[3] Fredrikson, Matt, Somesh Jha, and Thomas Ristenpart. "Model inversion attacks that exploit confidence information and basic countermeasures." In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322-1333, 2015.
[4] https://www.scrumguides.org/download.html
Class schedule
Regular appointments