Indiana University SPICE Talk

BIML’s work was featured in an April 5th talk at the Luddy Center for Artificial Intelligence at Indiana University.

Here is the talk abstract. If you or your organization is interested in hosting this talk, please let us know.

10, 23, 81 — Stacking up the LLM Risks: Applied Machine Learning Security

I present the results of an architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks previously identified by BIML in 2020. After a brief level-set, I cover the top 10 LLM risks, then detail 23 black-box LLM foundation model risks screaming out for regulation, and finally provide a bird’s-eye view of all 81 LLM risks BIML identified. BIML’s first work, published in January 2020, presented an in-depth ARA of a generic machine learning process model, identifying 78 risks. In this talk, I consider a more specific machine learning use case—large language models—and report the results of a detailed ARA of LLMs. This ARA serves two purposes: 1) it shows how our original BIML-78 can be adapted to a more particular ML use case, and 2) it provides a detailed accounting of LLM risks. At BIML, we are interested in “building security in” to ML systems from a security engineering perspective. Securing a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This ARA is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.

2 Comments

  1. At the company I work for, there is a group that hosts presentations on cybersecurity topics, with presenters from both academia and industry. I looked at your work and found it very interesting, and I would like to know whether there is any interest in giving this presentation online for us.
