BIML in darkreading 2: a focus on training data

The second article in our two-part darkreading series on machine learning data exposure and data-related risk turns to protecting training data without screwing it up. For the record, we believe that technical approaches like synthetic data generation and differential privacy definitely do screw up your data, sometimes so badly that the ML task you wanted to accomplish is no longer feasible.
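To see why differential privacy can cost you utility, consider the classic Laplace mechanism. The sketch below (our illustration, not code from the article; all function names are hypothetical) computes a differentially private mean and shows that the noise scale grows as the privacy budget epsilon shrinks, so stronger privacy guarantees mean noisier, less useful answers:

```python
import math
import random

def laplace_sample(scale, rng):
    # Draw a Laplace(0, scale) variate via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_mean(values, epsilon, lower, upper, rng):
    """Differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds the sensitivity of the
    mean over n values at (upper - lower) / n; the noise scale is
    sensitivity / epsilon, so smaller epsilon (stronger privacy) means
    proportionally more noise.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    scale = (upper - lower) / (n * epsilon)
    return true_mean + laplace_sample(scale, rng)

# Toy data: 1,000 values in [0, 100].
data_rng = random.Random(0)
data = [data_rng.uniform(0, 100) for _ in range(1000)]
true_mean = sum(data) / len(data)

def avg_abs_error(epsilon, trials=200):
    # Average absolute error of the DP mean over repeated releases.
    rng = random.Random(1)
    return sum(abs(dp_mean(data, epsilon, 0, 100, rng) - true_mean)
               for _ in range(trials)) / trials

err_strong_privacy = avg_abs_error(0.001)  # tight budget: heavy noise
err_weak_privacy = avg_abs_error(1.0)      # loose budget: light noise
```

With epsilon = 0.001 the average error is on the order of the data's entire range, which is the kind of utility collapse the post is warning about; with epsilon = 1.0 the answer is nearly exact but the privacy guarantee is correspondingly weak.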

Read the article now.

The first article in the series, which introduces the often-ignored problem of operational query data exposure, can be found here.
