Follow the Principle of Least Privilege [Principle 4]

The principle of least privilege states that only the minimum access necessary to perform an operation should be granted, and that access should be granted only for the minimum amount of time necessary.[i]

When you give out access to parts of a system, there is always some risk that the privileges associated with that access will be abused. For example, let's say you go on vacation and give a friend the key to your home so they can feed the pets, collect the mail, and so forth. Although you may trust the friend, there is always the possibility that there will be a party in your house without your consent, or that something else will happen that you don't like. Regardless of whether you trust your friend, there's really no need to put yourself at risk by giving more access than necessary. For example, if you don't have pets, but only need a friend to pick up the mail on occasion, you should relinquish only the mailbox key. Although your friend may still find a way to abuse that privilege, at least you don't have to worry about the possibility of additional abuse. If you give out the house key unnecessarily, all that changes.

Similarly, if you do get a house-sitter while you’re on vacation, you aren’t likely to let that person keep your keys when you’re not on vacation. If you do, you’re setting yourself up for additional risk. Whenever a key to your house is out of your control, there’s a risk of that key getting duplicated. If there’s a key outside your control, and you’re not home, then there’s the risk that the key is being used to enter your house. Any length of time that someone has your key and is not being supervised by you constitutes a window of time in which you are vulnerable to an attack. You want to keep such windows of vulnerability as short as possible to minimize your risks.

In an ML system, we most likely want to control access around lifecycle phases. In the training phase, the system may have access to lots of possibly sensitive training data. Assuming an offline model (where training is not continuous), after the training phase is complete, the system should no longer require access to those data. (As we discussed when we were talking about defense in depth, system engineers need to understand that in some sense all of the confidential data are now represented in the trained-up ML system and may be subject to ML-specific attacks.)

Thinking about access control in an ML system through the lens of the principle of least privilege is useful, particularly between lifecycle phases and between system components. Users of an ML system are not likely to need access to training data and test data, so don't give them that access. In fact, users may only require black-box API access to a running system. If that's the case, then provide only what is necessary in order to preserve security.

Less is more when it comes to the principle of least privilege. Limit data exposure to the components that require it, and only for as short a time as possible.
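
To make that concrete, here is a minimal sketch (in Python) of what phase- and role-scoped access might look like. The phase names, roles, resource labels, and the check_access helper are all hypothetical illustrations, and a real system would enforce this at the infrastructure level rather than in application code.

```python
# Minimal sketch of least privilege by lifecycle phase and role.
# All names here (phases, roles, resources) are illustrative, not a real API.

ALLOWED = {
    ("training", "training-pipeline"): {"raw_training_data", "model_checkpoints"},
    ("evaluation", "training-pipeline"): {"held_out_test_data", "model_checkpoints"},
    ("inference", "api-user"): {"prediction_endpoint"},                 # black-box access only
    ("inference", "ops-engineer"): {"prediction_endpoint", "service_logs"},
}

def check_access(phase: str, role: str, resource: str) -> bool:
    """Grant access only when the (phase, role) pair explicitly allows the resource."""
    return resource in ALLOWED.get((phase, role), set())

# A user of the running system never gets near the training data:
assert not check_access("inference", "api-user", "raw_training_data")
assert check_access("inference", "api-user", "prediction_endpoint")
```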



[i] Saltzer, Jerome H., and Michael D. Schroeder. "The Protection of Information in Computer Systems." Proceedings of the IEEE 63, no. 9 (1975): 1278–1308.

Fail Securely [Principle 3]

Even under ideal conditions, complex systems are bound to fail eventually. Failure is an unavoidable state that should always be planned for. From a security perspective, failure itself isn’t the problem so much as the tendency for many systems to exhibit insecure behavior when they fail.

The best real-world example we know is one that bridges the real world and the electronic world—credit card authentication. Big credit card companies such as Visa and MasterCard spend lots of money on authentication technologies to prevent credit card fraud. Most notably, whenever you go into a store and make a purchase, the vendor swipes your card through a device that calls up the credit card company. The credit card company checks to see if the card is known to be stolen. More amazingly, the credit card company analyzes the requested purchase in the context of your recent purchases and compares the patterns to overall spending trends. If their engine senses anything suspicious, the transaction is denied. (Sometimes the trend analysis is performed off-line and the owner of a suspect card gets a call later.)

This scheme appears to be remarkably impressive from a security point of view; that is, until you note what happens when something goes wrong. What happens if the line to the credit card company is down? Is the vendor required to say, “I’m sorry, our phone line is down”? No. The credit card company still gives out manual machines that take an imprint of your card, which the vendor can send to the credit card company for reimbursement later. An attacker need only cut the phone line before ducking into a 7-11.

ML systems are particularly complicated (what with all that dependence on data) and are prone to fail in new and spectacular ways. Consider a system that is meant to classify/categorize its input. Reporting an inaccurate classification seems like not such a bad thing to do. But in some cases, simply reporting what an ML system got wrong can lead to a security vulnerability. As it turns out, attackers can exploit misclassification to create adversarial examples[i], or use a collection of errors en masse to ferret out sensitive and/or confidential information used to train the model. In general, ML systems would do well to avoid transmitting low-confidence classification results to untrusted users in order to defend against these attacks, but of course that seriously constrains the usual engineering approach.

Classification results should only be provided when the system is confident they are correct. In the case of either a failure or a low confidence result, care must be taken that feedback from the model to a malicious user can’t be exploited. Note that many ML models are capable of providing confidence levels along with their output to address some of these risks. That certainly helps when it comes to understanding the classifier itself, but it doesn’t really address information exploit or leakage (both of which are more challenging problems). ML system engineers should carefully consider the sensitivity of their systems’ predictions and take into account the amount of trust they afford the user when deciding what to report.
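
As an illustration only, a thin wrapper like the sketch below can enforce that reporting decision: it returns a bare label when confidence clears a threshold and abstains otherwise, never handing raw scores back to the caller. The threshold value and the scikit-learn-style predict_proba interface are assumptions, not a prescription.

```python
import numpy as np

def guarded_predict(model, x, threshold=0.9):
    """Report only a label, and only when the model is confident.

    Low-confidence results come back as an abstention rather than a best guess,
    and raw probability vectors are never exposed to the (possibly untrusted) caller.
    """
    probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]  # assumed scikit-learn-style API
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return {"label": None, "status": "abstained"}  # fail securely: reveal as little as possible
    return {"label": top, "status": "ok"}              # no confidence scores leak out
```

Where to set the threshold, and whether abstention itself leaks useful information to an attacker, remain context-dependent judgment calls.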

If your ML system has to fail, make sure that it fails securely.



[i] Gilmer, Justin, Ryan P. Adams, Ian Goodfellow, David Andersen, and George E. Dahl. "Motivating the Rules of the Game for Adversarial Example Research." arXiv preprint arXiv:1807.06732 (2018).

Practice Defense in Depth [Principle 2]

The idea behind defense in depth is to manage risk with diverse defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense hopefully prevents a full breach.

Let’s go back to our example of bank security. Why is the typical bank more secure than the typical convenience store? Because there are many redundant security measures protecting the bank, and the more measures there are, the more secure the place is.

Security cameras alone are a deterrent for some. But if people don’t care about the cameras, then a security guard is there to defend the bank physically with a gun. Two security guards provide even more protection. But if both security guards get shot by masked bandits, then at least there’s still a wall of bulletproof glass and electronically locked doors to protect the tellers from the robbers. Of course if the robbers happen to kick in the doors, or guess the code for the door, at least they can only get at the teller registers, because the bank has a vault protecting the really valuable stuff. Hopefully, the vault is protected by several locks and cannot be opened without two individuals who are rarely at the bank at the same time. And as for the teller registers, they can be protected by having dye-emitting bills stored at the bottom, for distribution during a robbery.

Of course, having all these security measures does not ensure that the bank is never successfully robbed. Bank robberies do happen, even at banks with this much security. Nonetheless, it’s pretty obvious that the sum total of all these defenses results in a far more effective security system than any one defense alone.

The defense-in-depth principle may seem somewhat contradictory to the "secure-the-weakest-link" principle because we are essentially saying that defenses taken as a whole can be stronger than the weakest link. However, there is no contradiction. The principle "secure the weakest link" applies when components have security functionality that does not overlap. But when it comes to redundant security measures, it is indeed possible that the sum protection offered is far greater than the protection offered by any single component.

ML systems are constructed out of numerous components. And, as we pointed out multiple times above, the data are often the most important thing from a security perspective. This means that bad actors have as many opportunities to exploit an ML system as there are components, and then some. Each and every component comes with a set of risks, and each and every one needs to address those risks head on. But wait, there's more. Defense in depth teaches that vulnerabilities not addressed (or attacks not covered) by one component should, in principle, be caught by another. In some cases a risk may be controlled "upstream" and in others "downstream."

Let's consider an example: a given ML system design may attempt to secure sensitive training data behind some kind of authentication and authorization system, only allowing the model access to the data while it is actually training. While this may well be a reasonable and well-justified practice, it is by no means sufficient to ensure that no sensitive information in the dataset can be leaked through malicious misuse/abuse of the system as a whole. Some ML models are vulnerable to leaking sensitive information via carefully selected queries made to the operating model.[i] In other cases, lots of know-how in "learned" form may be leaked through a transfer attack.[ii] Maintaining a history of queries made by users, and preventing subsequent queries that together could be used to divine sensitive information, can serve as an additional defensive layer that protects against these kinds of attack.
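
The sketch below illustrates one shape such a layer could take: a per-user query log and budget wrapped around the model. The budget size, the user-identification scheme, and the model's predict interface are assumptions made for illustration; a production version would persist the history and feed it to offline analysis.

```python
from collections import defaultdict

class QueryAuditor:
    """An extra defensive layer in front of a trained model: log queries per user
    and cut off service once a budget is exhausted. Policy details are illustrative.
    """

    def __init__(self, model, max_queries_per_user=1000):
        self.model = model
        self.max_queries = max_queries_per_user
        self.history = defaultdict(list)  # user_id -> queries seen so far

    def predict(self, user_id, x):
        past = self.history[user_id]
        if len(past) >= self.max_queries:
            raise PermissionError("query budget exhausted; flag account for review")
        past.append(x)                     # retain for later forensic / extraction analysis
        return self.model.predict([x])[0]  # assumed scikit-learn-style interface
```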

Practicing defense in depth naturally involves applying the principle of least privilege to the users and operations engineers of an ML system. Identifying and preventing security exploits is much easier when every component limits its access to only the resources it actually requires. In this case, identifying and separating components in a design can help, because components become natural trust boundaries where controls can be put in place and policies enforced.

Defense in depth is especially powerful when each component works in concert with the others.



[i] M. Fredrikson, S. Jha, and T. Ristenpart, "Model Inversion Attacks That Exploit Confidence Information and Basic Countermeasures," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1322–1333.

[ii] B. Wang, Y. Yao, B. Viswanath, H. Zheng, and B. Y. Zhao, "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning," 27th USENIX Security Symposium, 2018, pp. 1281–1297.

Secure the Weakest Link [Principle 1]

Security people are quick to point out that security is like a chain.  And just as a chain is only as strong as the weakest link, an ML system is only as secure as its weakest component.  Want to anticipate where bad guys will attack your ML system?  Well, think through which part would be easiest to attack.

ML systems are different from many other artifacts that we engineer because the data in ML are just as important (or sometimes even more important) than the learning mechanism itself.  That means we need to pay even more attention to the data used to train, test, and operate an ML system than we might in a standard system.

In some sense, this turns the idea of an attack surface on its head. To understand what we mean, consider that the training data in an ML system may often come from a public location—that is, one that may be subject to poor data protection controls. If that's the case, perhaps the easiest way to attack an ML system of this flavor would be through polluting or otherwise manipulating the data before they even arrive. An attacker wins if they get to the ML-critical data before the ML system even starts to learn. Who cares about the public API of the trained-up and operating ML system if the data used to build it were already maliciously constructed?
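
One cheap upstream tripwire (a sketch only, and no substitute for real provenance controls or poisoning defenses) is to sanity-check each incoming batch of public data against a trusted baseline before it ever reaches training. The tolerance values and the assumption of integer-coded labels below are arbitrary placeholders.

```python
import numpy as np

def sanity_check_batch(X_new, y_new, baseline_mean, baseline_std, baseline_label_rates,
                       z_tol=4.0, label_tol=0.2):
    """Coarse checks on an incoming training batch; raises rather than silently training.

    Flags batches whose per-feature means drift far from a trusted baseline, or whose
    label mix departs sharply from historical rates. Assumes integer-coded labels.
    """
    X_new = np.asarray(X_new, dtype=float)
    drift = np.abs(X_new.mean(axis=0) - baseline_mean) / (baseline_std + 1e-12)
    if np.any(drift > z_tol):
        raise ValueError("feature distribution drifted from baseline; hold batch for review")

    y_new = np.asarray(y_new, dtype=int)
    observed = np.bincount(y_new, minlength=len(baseline_label_rates)) / len(y_new)
    if np.max(np.abs(observed - baseline_label_rates)) > label_tol:
        raise ValueError("label mix looks unusual; hold batch for review")
```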

Thinking about ML data as money makes for a good exercise. Where does the "money" (that is, data) in the system come from? How is it stored? Can counterfeit money help in an attack? Does all of the money get compressed into high-value storage in one place (say, the weights and thresholds learned in the ML system's distributed representation)? How does money come out of an ML system? Can money be transferred to an attacker? How would that work?

Let's stretch this analogy even further. When it comes to actual money, a sort of perverse logic pervades the physical security world. There's generally more money in a bank than in a convenience store, but which one is more likely to be held up? The convenience store, because banks tend to have much stronger security precautions. Convenience stores are a much easier target. Of course the payoff for successfully robbing a convenience store is much lower than knocking off a bank, but it is probably a lot easier to get away from the convenience store crime scene. In short, you want to look for and better defend the convenience stores in your ML system.

ML has another weird factor that is worth considering—that is that much of the source code is open and re-used all over the place.  Should you trust that algorithm that you snagged from GitHub? How does it work? Does it protect those oh so valuable data sets you built up?  What if the algorithm itself is sneakily compromised?  These are some potential weak links that may not be considered in a traditional security stance.
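
One small habit that helps (sketched below, under the assumption that you recorded a digest when you first vetted the third-party code or pretrained model) is to refuse to load any downloaded artifact whose hash no longer matches the pinned value. This says nothing about whether the original code was trustworthy, only that it hasn't changed since you reviewed it.

```python
import hashlib

# Digest recorded when the artifact was originally reviewed (placeholder value).
PINNED_SHA256 = "replace-with-the-digest-you-recorded-at-review-time"

def verify_artifact(path, expected=PINNED_SHA256, chunk_size=1 << 20):
    """Refuse to use a downloaded model file or script whose contents have changed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"{path} does not match its pinned digest; do not load it")
```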

Identifying the weakest component of a system falls directly out of a good risk analysis. Given good risk analysis information, addressing the most serious risk first, instead of a risk that may be easiest to mitigate, is always prudent. Security resources should be doled out according to risk. Deal with one or two major problems, and move on to the remaining ones in order of severity.

Of course, this strategy can be applied forever, because 100% security is never attainable. There is a clear need for some stopping point. It is okay to stop addressing risks when all components appear to be within the threshold of acceptable risk. The notion of acceptability depends on the business proposition.

All of our analogies aside, good security practice dictates an approach that identifies and strengthens weak links until an acceptable level of risk is achieved.


BIML Security Principles

Early work in security and privacy of ML has taken an "operations security" tack focused on securing an existing ML system and maintaining its data integrity. For example, Nicolas Papernot uses Saltzer and Schroeder's famous security principles to provide an operational perspective on ML security [1]. In our view, this work does not go far enough into ML design to satisfy our goals. Following Papernot, we directly address Saltzer and Schroeder's security principles as adapted in the book Building Secure Software by Viega and McGraw. Our treatment is more directly tied to security engineering than to security operations.

Security Principles and Machine Learning

In security engineering it is not practical to protect against every type of possible attack. Security engineering is an exercise in risk management. One approach that works very well is to make use of a set of guiding principles when designing and building systems. Good guiding principles tend to improve the security outlook even in the face of unknown future attacks. This strategy helps to alleviate the "attack-of-the-day" problem so common in the early days of software security (and also sadly common in early approaches to ML security).

In this series of blog entries we present ten principles for ML security lifted directly from Building Secure Software and adapted for ML. The goal of these principles is to identify and to highlight the most important objectives you should keep in mind when designing and building a secure ML system. Following these principles should help you avoid lots of common security problems. Of course, this set of principles will not be able to cover every possible new flaw lurking in the future.

Some caveats are in order. No list of principles like the one presented here is ever perfect. There is no guarantee that if you follow these principles your ML system will be secure. Not only do our principles present an incomplete picture, but they also sometimes conflict with each other. As with any complex set of principles, there are often subtle tradeoffs involved.

Clearly, application of these ten principles must be sensitive to context. A mature risk management approach to ML provides the sort of data required to apply these principles intelligently.

Principle 1: Secure the Weakest Link

Principle 2: Practice Defense in Depth

Principle 3: Fail Securely

Principle 4: Follow the Principle of Least Privilege

Principle 5: Compartmentalize

Principle 6: Keep It Simple

Principle 7: Promote Privacy

Principle 8: Remember That Hiding Secrets Is Hard

Principle 9: Be Reluctant to Trust

Principle 10: Use Your Community Resources

What will follow in the next few blog entries is a treatment of each of the ten principles from an ML systems engineering perspective.

We'll start with the first two tonight.


[1] N. Papernot, "A Marauder's Map of Security and Privacy in Machine Learning," arXiv:1811.01134, Nov. 2018. (See https://berryvilleiml.com/references/ for more.)