Written by Chris Anley, Chief Scientist, NCC Group
This paper collects a set of notes and research projects conducted by NCC Group on the security of Machine Learning (ML) systems. Its objective is to offer an industry perspective to the academic community and to collate helpful references for security practitioners, enabling more effective security auditing and security-focused code review of ML systems. It describes specific practical attacks and common security problems, and includes some general background on ML, mostly for context, so that the attack scenarios are clear, along with notes on frameworks and development processes.
Published by Jennifer Fernick
Jennifer Fernick is the Global Head of Research at NCC Group.