Secure the Weakest Link [Principle 1]

Security people are quick to point out that security is like a chain.  And just as a chain is only as strong as the weakest link, an ML system is only as secure as its weakest component.  Want to anticipate where bad guys will attack your ML system?  Well, think through which part would be easiest to attack.

ML systems are different from many other artifacts that we engineer because the data in ML are just as important as (or sometimes even more important than) the learning mechanism itself.  That means we need to pay even more attention to the data used to train, test, and operate an ML system than we might in a standard system.

In some sense, this turns the idea of an attack surface on its head. To understand what we mean, consider that the training data in an ML system may often come from a public location (that is, one that may be subject to poor data protection controls). If that's the case, perhaps the easiest way to attack an ML system of this flavor is to pollute or otherwise manipulate the data before they even arrive. An attacker wins if they get to the ML-critical data before the ML system even starts to learn. Who cares about the public API of the trained-up and operating ML system if the data used to build it were already maliciously constructed?
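
To make the data-provenance point concrete, here is a minimal Python sketch of one cheap control: record a cryptographic digest of the training data snapshot you actually reviewed, and refuse to train if the bytes on disk no longer match. This is only an illustration of the idea; the single-file dataset, the EXPECTED_SHA256 value, and the load_training_data name are our own assumptions, not part of any particular toolchain.

```python
import hashlib
from pathlib import Path

# Digest of the dataset snapshot we reviewed (hypothetical placeholder value).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to avoid loading it all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_training_data(path: str) -> bytes:
    """Refuse to train on data that does not match the reviewed snapshot."""
    p = Path(path)
    if sha256_of(p) != EXPECTED_SHA256:
        raise RuntimeError(
            f"Training data at {p} does not match the pinned digest; "
            "refusing to train on possibly tampered data."
        )
    return p.read_bytes()  # hand off to your real parser/loader here
```

A check like this obviously does nothing about data that were malicious before you reviewed them, but it does raise the bar against the "pollute it upstream after the fact" version of the attack.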

Thinking about ML data as money is a good exercise.  Where does the "money" (that is, data) in the system come from?  How is it stored?  Can counterfeit money help in an attack? Does all of the money get compressed into high-value storage in one place (say, the weights and thresholds learned in the ML system's distributed representation)?  How does money come out of an ML system?  Can money be transferred to an attacker?  How would that work?

Let's stretch this analogy even further. When it comes to actual money, a sort of perverse logic pervades the physical security world. There's generally more money in a bank than in a convenience store, but which one is more likely to be held up? The convenience store, because banks tend to have much stronger security precautions. Convenience stores are a much easier target. Of course, the payoff for successfully robbing a convenience store is much lower than for knocking off a bank, but it is probably a lot easier to get away from the convenience store crime scene. The point is that you want to look for and better defend the convenience stores in your ML system.

ML has another weird factor worth considering: much of the source code is open and reused all over the place.  Should you trust that algorithm you snagged from GitHub? How does it work? Does it protect those oh-so-valuable data sets you built up?  What if the algorithm itself is sneakily compromised?  These are potential weak links that a traditional security stance may not consider. A sketch of one simple countermeasure follows.
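
Here is one hedged way to pin that GitHub-snagged code: treat the vendored repository as untrusted until it sits at the exact commit you reviewed, and fail fast otherwise. The REVIEWED_COMMIT value and the third_party/cool-training-algorithm path are hypothetical; only the git and subprocess calls are standard.

```python
import subprocess

# Commit we actually reviewed before deciding to trust the code (hypothetical value).
REVIEWED_COMMIT = "0123456789abcdef0123456789abcdef01234567"

def checked_out_commit(repo_dir: str) -> str:
    """Return the commit currently checked out in a vendored git repository."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "rev-parse", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def assert_reviewed(repo_dir: str) -> None:
    """Fail fast if the vendored training code has drifted from the reviewed commit."""
    actual = checked_out_commit(repo_dir)
    if actual != REVIEWED_COMMIT:
        raise RuntimeError(
            f"{repo_dir} is at {actual}, not the reviewed commit {REVIEWED_COMMIT}; "
            "re-review before training with it."
        )

if __name__ == "__main__":
    assert_reviewed("third_party/cool-training-algorithm")
```

Pinning a commit does not tell you the code is good, of course; it only guarantees you are running the code somebody actually looked at.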

Identifying the weakest component of a system falls directly out of a good risk analysis. Given good risk analysis information, addressing the most serious risk first, instead of a risk that may be easiest to mitigate, is always prudent. Security resources should be doled out according to risk. Deal with one or two major problems, and move on to the remaining ones in order of severity.

Of course, this strategy can be applied forever, because 100% security is never attainable. There is a clear need for some stopping point. It is okay to stop addressing risks when all components appear to be within the threshold of acceptable risk. The notion of acceptability depends on the business proposition.
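
As a rough illustration of "worst risk first, stop at acceptable," here is a small Python sketch that ranks risks by impact times likelihood and stops once everything remaining falls under an acceptable-risk threshold. The components, the numbers, and the ACCEPTABLE cutoff are made up for illustration; real ratings have to come out of your own risk analysis.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    component: str
    impact: float       # business impact if exploited, 0..10
    likelihood: float   # estimated likelihood of exploitation, 0..1

    @property
    def severity(self) -> float:
        return self.impact * self.likelihood

# Illustrative numbers only; real ratings come from your own risk analysis.
ACCEPTABLE = 2.0
risks = [
    Risk("public training-data feed", impact=9, likelihood=0.6),
    Risk("third-party training code", impact=7, likelihood=0.3),
    Risk("inference API rate limiting", impact=3, likelihood=0.5),
]

# Work the list from most to least severe, and stop once everything left
# is under the acceptable-risk threshold.
for risk in sorted(risks, key=lambda r: r.severity, reverse=True):
    if risk.severity <= ACCEPTABLE:
        break
    print(f"Mitigate next: {risk.component} (severity {risk.severity:.1f})")
```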

All of our analogies aside, good security practice dictates an approach that identifies and strengthens weak links until an acceptable level of risk is achieved.

Read the rest of the principles here.

BIML Security Principles

Early work in security and privacy of ML has taken an "operations security" tack focused on securing an existing ML system and maintaining its data integrity. For example, Nicolas Papernot uses Saltzer and Schroeder's famous security principles to provide an operational perspective on ML security [1]. In our view, this work does not go far enough into ML design to satisfy our goals. Following Papernot, we directly address Saltzer and Schroeder's security principles as adapted in the book Building Secure Software by Viega and McGraw. Our treatment is more directly tied to security engineering than to security operations.

Security Principles and Machine Learning

In security engineering it is not practical to protect against every type of possible attack. Security engineering is an exercise in risk management. One approach that works very well is to make use of a set of guiding principles when designing and building systems. Good guiding principles tend to improve the security outlook even in the face of unknown future attacks. This strategy helps to alleviate the "attack-of-the-day" problem so common in the early days of software security (and also sadly common in early approaches to ML security).

In this series of blog entries we present ten principles for ML security lifted directly from Building Secure Software and adapted for ML. The goal of these principles is to identify and to highlight the most important objectives you should keep in mind when designing and building a secure ML system. Following these principles should help you avoid lots of common security problems. Of course, this set of principles will not be able to cover every possible new flaw lurking in the future.

Some caveats are in order. No list of principles like the one presented here is ever perfect. There is no guarantee that if you follow these principles your ML system will be secure. Not only do our principles present an incomplete picture, but they also sometimes conflict with each other. As with any complex set of principles, there are often subtle tradeoffs involved.

Clearly, application of these ten principles must be sensitive to context. A mature risk management approach to ML provides the sort of data required to apply these principles intelligently.

Principle 1: Secure the Weakest Link

Principle 2: Practice Defense in Depth

Principle 3: Fail Securely

Principle 4: Follow the Principle of Least Privilege

Principle 5: Compartmentalize

Principle 6: Keep It Simple

Principle 7: Promote Privacy

Principle 8: Remember That Hiding Secrets Is Hard

Principle 9: Be Reluctant to Trust

Principle 10: Use Your Community Resources

What will follow in the next few blog entries is a treatment of each of the ten principles from an ML systems engineering perspective.

We’ll start with the first two tonight.


[1] N. Papernot, “A Marauder’s Map of Security and Privacy in Machine Learning,” arXiv:1811.01134, Nov. 2018. (See https://berryvilleiml.com/references/ for more.)

BIML art

The exceptionally tasteful BIML logo was designed by Jackie McGraw. The logo incorporates both a yin/yang concept (huh, wonder where that comes from?) and a glyph that combines a B, an M, and an L in a clever way.

Here is the glyph:

The BIML glyph

Here is my personal logo (seen all over, but most famously on the cover of Software Security):

Gary McGraw’s logo (as seen on the cover of Software Security among other places)

Here is the combined glyph plus yin/yang, which makes up the official BIML logo.

Last, but not least, there is the “bonus” cow, which secretly includes a picture of Clarke County in its spots. Clarke County is where metropolitan Berryville is situated in Virginia.

BIML is Born

Welcome to the BIML blog where we will (informally) write about MLsec, otherwise known as Machine Learning security. BIML is short for the Berryville Institute of Machine Learning. For what it’s worth, we think it is pretty amusing to have a “Berryville Institute” just like Santa Fe has the “Santa Fe Institute.” You go, Berryville!

BIML was born when I retired from my job of 24 years in January 2019. Many years ago as a graduate student at Indiana University, I did lots of work in machine learning and AI as part of my Ph.D. program in Cognitive Science. As a student of Doug Hofstadter’s I was deeply interested in emergent computation, sub-symbolic representation, error making, analogy, and low-level perceptual systems. I was fortunate to be surrounded by lots of fellow students interested in building things and finding out how the systems we were learning about by reading papers actually worked. Long story short, we built lots of real systems and published a bunch of papers about what we learned in the scientific literature.

The official BIML logo, designed by Jackie McGraw

Our mission at BIML is to explore the security implications built into ML systems. We’re starting with neural networks, which are all the rage at the moment, but we intend to think and write about genetic algorithms, sparse distributed memory, and other ML systems. Just to make this perfectly clear, we’re not really thinking much about using ML for security; rather, we are focused on the security of ML systems themselves.

Fast forward 24 years. As one of the fathers of software security and security engineering at the design level, I have been professionally interested in how systems break, what kinds of risks are inherent in system design, and how to design systems that are more secure. At BIML we are applying these techniques and approaches directly to ML systems.

Through a series of small-world phenomena, the BIML group coalesced, sparked first when I met Harold Figueroa at an Ntrepid Technical Advisory Board meeting in the Fall of 2018 (I am the Chair of Ntrepid’s TAB, and Harold leads Ntrepid’s machine learning research group). Harold and I had a great initial discussion over dinner about representation, ML progress, and other issues. We decided that continuing those discussions and digging into some research was in order. Victor Shepardson, who did lots of ML work at Dartmouth as a Master’s student, was present for our first meeting in January. We quickly added Richie Bonett, a Berryville local like me (!!) and a Ph.D. student at William and Mary, to the group. And BIML was officially born.

We started with a systematic and in-depth review of the MLsec literature. You can see the results of our work in the annotated bibliography, which we will continue to curate as we read and discuss papers.

Our second task was to develop an Attack Taxonomy that makes meta-level sense of the burgeoning ML attack literature. These days, lots of energy is being expended to attack certain ML systems. Some of the attacks are quite famous (stop sign recognition and seeing cats as tanks both come to mind), and the popular press has made much of both ML progress and amusing attacks against ML systems. You can review the (ongoing) Attack Taxonomy work elsewhere on our website.

We’re now in the midst of an Architectural Risk Analysis (ARA) of a generic ML system. Our approach follows the ARA process introduced in my book Software Security and applied professionally for many years at Cigital. We plan to publish our work here as we make progress.

We’re really having fun with this group, and we hope you will get as much of a kick out of our results as we’re getting. We welcome contact and collaboration. Please let us know what you think.