McGraw Talks to Lemos About ML Security Research
Veteran tech reporter Rob Lemos had a few questions for BIML regarding ML security hacking (aka red teaming) and the DMCA. Here is the resulting darkreading article. See the original questions which flesh out BIML’s position more clearly below.
Lemos: I’m familiar with attempts to use DMCA to stifle security research in general. What are the dangers specifically to AI security and safety researchers? Have there been any actual legal cases where a DMCA violation was alleged against a AI researcher?
BIML: I am unaware of DMCA cases against MLsec researchers.
I will state for the record that we are not going to red team or pen test out way to AI trustworthiness. The real way to secure ML is at the design level with a strong focus on training data, representation, and evaluation. Pen testing has high sex appeal but limited effectiveness.
As designed today, ML systems have flaws that can be exposed by hacking but not fixed by hacking. See the BIML work for a whole host of risks to think about.
The biggest challenge for MLsec researchers is overcoming hype and disinformation from vendors. In my view the red teaming stuff plays right into this by focusing attention on marginally relevant things.
Lemos: What changes are the US Copyright Office experts considering to support AI security and safety research? How do these changes redraw the current line between allowed and illegal research?
BIML:The changes appear to be merely cosmetic. The high irony is that AI vendors themselves are subject to many suits involving misuse of copyrighted material during training. My prediction is that we will need clarity there first before we fret about “hacking.”
Note that there are some very good scientists working on real MLsec (see the BIML bibliography top 5 for some stellar examples).
Lemos: Are AI companies pushing back against these efforts? Do you think we will end up with a “reasonable” approach (for whatever definition of reasonable you want)?
Most AI vendors seem to acknowledge various problems and then downplay them with cutesy names like “hallucinate” instead of “WRONG” and “prompt injection” instead of “broken, ill-formed, under-defined natural language API.”
We need to clean up the black box itself (and it’s interfaces), not try to constrain bad black box behavior with an infinite set of filters. In security engineering we know all about the futility of black listing vs white listing…but black list filtering seems to be the current sanctioned approach.
Lemos: From your viewpoint, should we err on the side of protecting innovation (i.e., first movers) or on the side of ensuring safety and security? Why?
BIML: Security is always in a tradeoff against functionality. The glib answer is, “both.” We should not shut down security engineering and testing, but we should also not throw out the ML baby with the security bathwater.
If your enterprise is adopting ML for its powerful features, make sure to do a proper in-depth threat model of your system and cover your risks with appropriate controls.
0 Comments