MLsec Musings

  • The principle of least privilege states that only the minimum access necessary to perform an operation should be granted, and that access should be granted only for the minimum amount of time necessary.[i]

    When you give out access to parts of a system, there is always some risk that the privileges associated with that access will be abused. For example, let’s say you are going on vacation and you give a friend the key to your home, just to feed pets, collect mail, and so forth. Although y...
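
    A minimal sketch of what least privilege can look like in code, assuming a hypothetical issue_credential() helper: the grant names only the operations and the resource actually needed, and it expires on its own rather than lingering.

        from dataclasses import dataclass
        from datetime import datetime, timedelta, timezone

        @dataclass(frozen=True)
        class Credential:
            principal: str
            actions: tuple        # only the operations actually needed
            resource: str         # only the data actually needed
            expires_at: datetime  # access goes away on its own

        def issue_credential(principal, actions, resource, ttl_minutes):
            """Grant the minimum access for the minimum time."""
            return Credential(
                principal=principal,
                actions=tuple(actions),
                resource=resource,
                expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
            )

        # The training job may read one dataset, and only for an hour.
        cred = issue_credential("training-job", ["read"], "s3://training-data/2019/", 60)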

  • Even under ideal conditions, complex systems are bound to fail eventually. Failure is an unavoidable state that should always be planned for. From a security perspective, failure itself isn’t the problem so much as the tendency for many systems to exhibit insecure behavior when they fail.

    The best real-world example we know is one that bridges the real world and the electronic world—credit card authentication. Big credit card companies such as Visa and MasterCard spend lots of money on au...
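
    A minimal sketch of failing securely, with an illustrative check_authorization() stand-in: when the check itself blows up, the request is denied rather than waved through.

        def check_authorization(user, action):
            # Stand-in for a real check; here it simulates an outage.
            raise ConnectionError("authorization service unreachable")

        def handle_request(user, action):
            try:
                allowed = check_authorization(user, action)
            except Exception:
                allowed = False  # fail closed: an error means "no", not "yes"
            return "approved" if allowed else "denied"

        print(handle_request("alice", "export-model"))  # -> denied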

  • The idea behind defense in depth is to manage risk with diverse defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense hopefully prevents a full breach.

    Let’s go back to our example of bank security. Why is the typical bank more secure than the typical convenience store? Because there are many redundant security measures protecting the bank, and the more independent measures an attacker has to defeat, the harder a full breach becomes.

    Security cameras alone are a de...
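
    As a rough sketch of defense in depth around a prediction API (the individual checks here are placeholders, not a prescription), several independent layers each get a chance to reject a request, so a weakness in one layer does not hand over the whole system.

        def authenticated(request):
            return request.get("token") == "expected-token"

        def rate_limited(request):
            return request.get("requests_this_minute", 0) > 100

        def input_valid(request):
            features = request.get("features", [])
            return len(features) == 10 and all(isinstance(x, (int, float)) for x in features)

        def handle_prediction(request):
            if not authenticated(request):
                return "rejected: authentication"
            if rate_limited(request):
                return "rejected: rate limit"       # slows down model-extraction attempts
            if not input_valid(request):
                return "rejected: malformed input"  # blocks some adversarial probing
            return "prediction served"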

  • Security people are quick to point out that security is like a chain. And just as a chain is only as strong as the weakest link, an ML system is only as secure as its weakest component. Want to anticipate where bad guys will attack your ML system? Well, think through which part would be easiest to attack.

    ML systems are different from many other artifacts that we engineer because the data in ML are just as important as (or sometimes even more important than) the learning mechanism itself....
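
    One way to treat the data as a component worth protecting in its own right is to fingerprint the training set when it is vetted and re-check that fingerprint before training. This is only a sketch (the paths and the vetting step are assumed), but it makes silent tampering between vetting and training detectable.

        import hashlib

        def fingerprint(path):
            """SHA-256 of the dataset file, recorded when the data was vetted."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def load_training_data(path, expected_fingerprint):
            """Refuse to train on data that no longer matches the vetted version."""
            if fingerprint(path) != expected_fingerprint:
                raise RuntimeError(path + " does not match the vetted fingerprint")
            with open(path, "rb") as f:
                return f.read()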

  • BIML Security Principles

    gem

    25 July 2019

    Early work in security and privacy of ML has taken an “operations security” tack focused on securing an existing ML system and maintaining its data integrity. For example, Nicolas Papernot uses Saltzer and Schroeder’s famous security principles to provide an operational perspective on ML security.[i] In our view, this work does not go far enough into ML design to satisfy our goals. Following Papernot, we directly address Saltzer and Schroeder’s security principles as adapted in the book Buil...