Our research at BIML focuses on three threads: building a taxonomy of known attacks on ML, exploring a hypothesis relating representation to ML risk, and performing an architectural risk analysis (sometimes called a threat model) of ML systems in general.
Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, playing complex games (chess, go, Atari video games), and more. This has led to breathless popular press coverage of Artificial Intelligence and has elevated “deep learning” to an almost magical status in the eyes of the public. Use of ML is exploding, though it is often poorly understood and its adoption is partly driven by hype.
But ML is not magic. It is simply sophisticated associative learning technology, based on algorithms developed over the last thirty years. In fact, much of the recent progress in the field can be attributed to faster hardware and much larger data sets rather than to any particular scientific breakthrough.
BIML is concerned about the systemic risk introduced by adopting ML in a haphazard fashion. Addressing security risk in ML is not a new idea, but most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. While we encourage these lines of inquiry, our own research focuses on understanding and categorizing the security engineering risks that ML introduces at the design level.