Be Reluctant to Trust [Principle 9]
ML systems rely on a number of possibly untrusted external sources for both their data and their computation. Let’s take on data first. The mechanisms used to collect and process data for training and evaluation make an obvious target. Of course, ML engineers need to get their data somehow, and this necessarily raises the question of trust. How does an ML system know it can trust the data it’s being fed? And, more generally, what can the system do to evaluate the collector’s trustworthiness? Blindly trusting sources of information exposes the system to security risks and must be avoided.
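As a concrete illustration of not trusting the data pipeline, the sketch below checks a delivered dataset file against a digest published out of band and enforces a minimal schema before anything is ingested. The file layout, the label set, and the digest are assumptions made for the example, not details of any particular system:

```python
# Sketch: verify a training-data file before ingesting it, rather than
# trusting the collector blindly. The expected digest would be published
# by the data provider over a separate, trusted channel.
import csv
import hashlib

EXPECTED_SHA256 = "replace-with-digest-published-by-the-data-provider"  # placeholder

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_training_rows(path: str) -> list[dict]:
    # 1. Integrity: does the file match what the provider claims to have sent?
    if sha256_of(path) != EXPECTED_SHA256:
        raise ValueError("dataset digest mismatch; refusing to ingest")

    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # 2. Sanity: enforce the schema and values we expect, instead of
            #    trusting the collector's organization of the data.
            label = row.get("label")
            if label not in {"benign", "malicious"}:  # illustrative label set
                raise ValueError(f"unexpected label: {label!r}")
            rows.append(row)
    return rows
```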
Next, let’s turn to external sources of computation. External tools such as TensorFlow, Kubeflow, and pip can be evaluated based on the security expertise of their engineers, time-proven resilience to attacks, and their own reliance on further external tools, among other metrics. Nonetheless, it would be a mistake to assume that any external tool is infallible. Systems need to extend as little trust as possible, in the spirit of compartmentalization, to minimize the capabilities of threats operating through external tools.
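One concrete way to extend less trust to a tool like pip is hash pinning: pip’s --require-hashes mode refuses to install any package whose archive does not match a digest reviewed and recorded ahead of time, so a compromised index or mirror cannot silently substitute a different artifact. The version number and digest below are placeholders:

```
# requirements.txt -- every dependency pinned to an exact version and an
# expected archive digest (version and digest shown here are placeholders)
tensorflow==2.15.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with pip install --require-hashes -r requirements.txt then fails closed if any pinned digest does not match, or if any dependency is missing a hash entirely.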
It can help to think of the various components of an ML system as extending trust to one another: dataset assembly could trust the data collectors’ organization of the data, or it could build in safeguards to ensure normalization; the inference algorithm could trust the model’s obfuscation of training data, or it could refuse to answer queries designed to extract sensitive information. Sometimes it is more practical to trust certain properties of the data or of other components, but in the interest of secure design only a minimum amount of trust should be afforded. Building more security into each component makes attacks much harder to orchestrate successfully.
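As a sketch of the inference-side option, the wrapper below extends minimal trust to callers: it rate-limits each client and returns only a predicted label rather than the full probability vector, which makes extraction and membership-inference queries less informative. The model interface (a scikit-learn-style predict_proba), the client identifiers, and the query budget are all assumptions for illustration:

```python
# Sketch: an inference wrapper that does not assume its callers are benign.
import time
from collections import defaultdict, deque

QUERY_LIMIT = 100       # max queries per client per window (assumed policy)
WINDOW_SECONDS = 3600

class GuardedPredictor:
    def __init__(self, model):
        self.model = model
        self.history = defaultdict(deque)  # client_id -> recent query timestamps

    def predict(self, client_id, features):
        now = time.time()
        recent = self.history[client_id]
        # Drop timestamps that have aged out of the rate-limit window.
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()
        if len(recent) >= QUERY_LIMIT:
            raise PermissionError("query budget exceeded; possible extraction attempt")
        recent.append(now)

        # Return only the top label, not the full probability vector, so that
        # extraction and membership-inference queries learn as little as possible.
        probs = self.model.predict_proba([features])[0]
        return int(probs.argmax())
```

The exact budget and response shape are policy decisions; the point is simply that the inference component builds in its own safeguards rather than trusting whoever is querying it.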