BIML in darkreading 2: a focus on training data
The second article in our two-part darkreading series on machine learning data exposure and data-related risk focuses on protecting training data without screwing it up. For the record, we believe that technical approaches like synthetic data creation and differential privacy definitely do screw up your data, sometimes so much that the ML task you wanted to accomplish is no longer feasible.
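To see why a privacy mechanism can degrade utility, consider the standard Laplace mechanism at the heart of differential privacy: noise is scaled to sensitivity divided by the privacy budget epsilon, so a strong privacy guarantee (small epsilon) can swamp the very statistic you wanted. The sketch below is purely illustrative and not from the article or BIML's work; the toy dataset, epsilon values, and the mean query are all assumptions chosen for demonstration.

```python
import math
import random

def laplace_sample(scale: float, rng: random.Random) -> float:
    """Inverse-CDF sample from a Laplace(0, scale) distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(data, lo, hi, epsilon, rng):
    """Differentially private mean of values clipped to [lo, hi]."""
    clipped = [min(max(x, lo), hi) for x in data]
    true_mean = sum(clipped) / len(clipped)
    # One record can move the clipped mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(clipped)
    return true_mean + laplace_sample(sensitivity / epsilon, rng)

rng = random.Random(42)
data = [rng.gauss(50.0, 10.0) for _ in range(100)]  # toy dataset
true_mean = sum(data) / len(data)

# Strong privacy (eps=0.01): noise scale is 100, dwarfing a mean near 50.
# Weak privacy (eps=10): noise scale is 0.1, answer stays usable.
print(f"true mean         : {true_mean:.2f}")
print(f"DP mean, eps=0.01 : {dp_mean(data, 0.0, 100.0, 0.01, rng):.2f}")
print(f"DP mean, eps=10   : {dp_mean(data, 0.0, 100.0, 10.0, rng):.2f}")
```

Shrinking epsilon tightens the privacy guarantee but inflates the noise scale linearly, which is exactly the utility trade-off the article cautions about.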
The first article in the series can be found here; it introduces the often-ignored problem of operational query data exposure.