BIML Results

BIML in the Barn Video Series

An Architectural Risk Analysis of Machine Learning Systems

At BIML, we are interested in “building security in” to machine learning (ML) systems from a security engineering perspective. This means understanding how ML systems are designed for security, teasing out possible security engineering risks, and making such risks explicit. We are also interested in the impact of including an ML system as part of a larger design. Our basic motivating question is: how do we secure ML systems proactively while we are designing and building them? This architectural risk analysis (ARA) is an important first step in our mission to help engineers and researchers secure ML systems.

We present a basic ARA of a generic ML system, guided by an understanding of standard ML system components and their interactions.
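
To make the idea of component-by-component analysis concrete, here is a minimal Python sketch of how an analyst might enumerate a generic ML pipeline and the risks to review at each stage. The component names and example risks below are our own illustrative assumptions, not the ARA's list; see the full document for the actual components and risks.

    # Illustrative sketch only (not taken from the ARA): one way to walk a
    # generic ML system component by component and list security questions.
    # Component names and example risks are assumptions for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        example_risks: list = field(default_factory=list)

    GENERIC_ML_SYSTEM = [
        Component("raw data collection", ["poisoned or untrustworthy sources"]),
        Component("dataset assembly", ["sampling bias", "label tampering"]),
        Component("training / learning algorithm", ["data poisoning"]),
        Component("evaluation", ["misleading metrics", "test set leakage"]),
        Component("deployed model + inference", ["adversarial inputs", "model extraction"]),
        Component("outputs", ["confidence leakage", "downstream misuse"]),
    ]

    def enumerate_risks(system):
        """Yield each component paired with a risk an analyst would review."""
        for component in system:
            for risk in component.example_risks:
                yield f"{component.name}: {risk}"

    if __name__ == "__main__":
        for line in enumerate_risks(GENERIC_ML_SYSTEM):
            print(line)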

Download the full document here

Interactive risk framework


IEEE Computer Article

An introduction to BIML, “Security Engineering for Machine Learning,” published in IEEE Computer, volume 52, number 8, pages 54-57.


A Taxonomy of ML Attacks

Our taxonomy considers attacks on ML algorithms themselves, as opposed to peripheral attacks or attacks on ML infrastructure (e.g., software frameworks or hardware accelerators). See the taxonomy.
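
As a rough illustration of that scope boundary, the following Python sketch tags a few well-known MLsec attack classes as in scope (they target the learning or inference algorithm) and a few infrastructure attacks as out of scope. The attack names are common categories offered only as examples; the taxonomy itself is the authoritative list.

    # Illustrative sketch of the scope distinction the taxonomy draws.
    # Attack names below are common MLsec categories used as examples;
    # consult the taxonomy itself for the authoritative list.
    ALGORITHM_ATTACKS = {
        "poisoning": "corrupt training data to change learned behavior",
        "evasion (adversarial examples)": "perturb inputs so the model misclassifies",
        "extraction": "reconstruct the model or its parameters via queries",
        "inversion": "recover sensitive training data from model outputs",
    }

    INFRASTRUCTURE_ATTACKS = {
        "framework compromise": "backdoored ML library or build pipeline",
        "hardware attack": "fault injection against a GPU or other accelerator",
    }

    def in_scope(attack: str) -> bool:
        """The taxonomy covers attacks on the ML algorithm itself,
        not on the surrounding software or hardware infrastructure."""
        return attack in ALGORITHM_ATTACKS

    if __name__ == "__main__":
        for name in list(ALGORITHM_ATTACKS) + list(INFRASTRUCTURE_ATTACKS):
            print(f"{name}: {'in scope' if in_scope(name) else 'out of scope'}")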


Annotated Bibliography

Our work began with a review of existing scientific literature in MLsec. Each of our meetings includes discussion of a few new papers. After we read a paper, we add it to our annotated bibliography, which also includes a top 5 list. Have a look.


Towards an Architectural Risk Analysis of ML

We are interested in “building security in” to ML systems from a security engineering perspective. This means understanding how ML systems are designed for security (including what representations they use), teasing out possible engineering tradeoffs, and making such tradeoffs explicit. We are also interested in the impact of including an ML system as a component in a larger design. Our basic motivating question is: how do we secure ML systems proactively while we are designing and building them?

Early work in security and privacy in ML has taken an “operations security” tack focused on securing an existing ML system and maintaining its data integrity. For example, Nicolas Papernot uses Saltzer and Schroeder’s famous security principles to provide an operational perspective on ML security. In our view, this work does not go far enough into ML design to satisfy our goals.

A key goal of our work is to develop a basic architectural risk analysis (sometimes called a threat model) of a typical ML system. Our analysis will take into account common design flaws such as those described by the IEEE Center for Secure Design.

A complete architectural risk analysis for a generic ML system is underway and will be available soon.