Berryville Institute of Machine Learning

Research at BIML focuses on machine learning security: the idea of building security in as applied to machine learning technology itself. Our groundbreaking 2020 work introduced the BIML-78, a set of 78 security risks associated with a generic ML process model. This early work has been applied in the IriusRisk threat modeling tool, at Google for internal analysis, and by the United States Air Force in research grant solicitations. We continue active research on ML risk, grounded in a working theory of distributed representation.

At BIML, we believe that moving beyond “adversarial AI” and the all-too-prevalent “attack-of-the-day” approach to AI security is essential. In our view, the presence of an “adversary” is by no means necessary when it comes to design-level risks in machine learning systems. That is, some risks don’t require a specific attacker with specific motivations to be risks at all. Insecure systems invite attacks, whether or not such attacks have yet been discovered. That’s why we describe our field of work as “machine learning security” rather than “adversarial AI.”

We are happy to report that real progress has been made in applied ML security since the publication of our original generic architectural risk analysis (ARA) in 2020. Technology now exists: 1) to identify ML applications in use (in order to build an inventory), 2) to threat model an applied ML application, and 3) to build controls around specific ML risks described in such a threat model. Our objective now is to put a finer point on LLM-related ML risks so that appropriate controls can be identified and put in place.
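
Those three capabilities chain together naturally: the inventory tells you what to threat model, and the threat model tells you which controls to build. The sketch below shows one minimal way the resulting data might hang together, in Python. It is illustrative only: the risk labels are a tiny hand-picked subset in the spirit of the BIML-78 (not the taxonomy itself), and the application, owner, and control strings are hypothetical.

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):
        # Illustrative labels only; the real BIML-78 is far larger and more precise.
        DATA_POISONING = "data poisoning"
        ADVERSARIAL_EXAMPLES = "adversarial examples"
        DATA_CONFIDENTIALITY = "data confidentiality"

    @dataclass
    class MLApplication:
        # One entry in an ML application inventory (step 1).
        name: str
        owner: str
        # Risks surfaced by threat modeling (step 2), and the controls
        # built around them (step 3), keyed by the risk they address.
        risks: set[Risk] = field(default_factory=set)
        controls: dict[Risk, str] = field(default_factory=dict)

    def uncontrolled_risks(app: MLApplication) -> set[Risk]:
        # The question a threat model ultimately exists to answer:
        # which identified risks still lack a control?
        return app.risks - app.controls.keys()

    # A hypothetical LLM-backed application with one unmitigated risk.
    bot = MLApplication(
        name="support-bot",
        owner="platform-team",
        risks={Risk.DATA_POISONING, Risk.DATA_CONFIDENTIALITY},
        controls={Risk.DATA_CONFIDENTIALITY: "redact PII before prompt assembly"},
    )
    print(uncontrolled_risks(bot))  # data poisoning remains uncontrolled

The point of even a toy structure like this is that “build an inventory” and “threat model an application” stop being slideware and become queryable state: any application whose uncontrolled_risks() set is non-empty has known, named work left to do.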

BIML is located at the foot of the Blue Ridge Mountains on the banks of the Shenandoah River in Berryville, Virginia.