Keynote Address: PETs, POTs, and Pitfalls: Rethinking the Protection of Users against Machine Learning, presented at ScAINet'19 (2019)

by Carmela Troncoso

Summary: In a machine-learning dominated world, users' digital interactions are monitored and scrutinized in order to enhance services. These enhancements, however, may not always have users' benefit and preferences as their primary goal. Machine learning, for instance, can be used to infer users' demographics and interests to fuel targeted advertisements, regardless of people's privacy rights; or to learn bank customers' behavioral patterns to maximize the monetary returns of loans, disregarding the risk of discrimination. In other words, machine learning models may be adversarial in their goals and operation. Consequently, adversarial machine learning techniques that are usually considered undesirable can be turned into robust protection mechanisms for users. In this talk we discuss two protective uses of adversarial machine learning, and challenges for protection arising from the biases implicit in many machine learning models.
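To make the idea of "adversarial ML as protection" concrete, below is a minimal illustrative sketch, not taken from the talk: a user applies an FGSM-style gradient-sign perturbation to their own feature vector so that a hypothetical profiling model (here, a toy logistic classifier with made-up weights) becomes far less confident in the demographic label it infers. All names and values are assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability the (hypothetical) profiler assigns to the target class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def protective_perturbation(w, b, x, eps=0.5):
    """FGSM-style step taken by the *user*: for a logistic model the score
    gradient w.r.t. x_i has the sign of w_i, so move each feature against
    that sign, bounded by eps, to suppress the inferred label."""
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

# Made-up profiler weights and a user's feature vector (illustrative only).
w, b = [1.2, -0.8, 2.0], -0.5
x = [1.0, 0.3, 0.9]

p_before = predict(w, b, x)                        # profiler is confident
x_adv = protective_perturbation(w, b, x, eps=0.5)  # user obfuscates features
p_after = predict(w, b, x_adv)                     # confidence drops
```

The same gradient-sign idea that attackers use to evade classifiers here works in the user's favor: the "attack" is mounted against a model whose goals are adversarial to the user, which is exactly the reframing the talk proposes.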