Once you learn that many of your new applications have ML built into them (often regardless of policy), what's the next step? Threat modeling, of course. On October 26, 2023, IriusRisk, the worldwide leader in threat modeling automation, announced a threat modeling library covering the ML risks identified by BIML.
This is the first tool in the world to include ML risk as part of threat modeling automation. Now we’re getting somewhere.
Dark Reading was the first publication to cover the news, and it remains the best source for cutting-edge stories about machine learning security. Read the original story here.
First MLsec talk on the BIML ARA Delivered Ultra Locally
The first talk on BIML’s new Architectural Risk Analysis of Machine Learning Systems was delivered this Wednesday at Lord Fairfax Community College. The talk was well attended, both in person and by a remote audience joining virtually. The Winchester Star published a short article about the talk.
Berryville Institute of Machine Learning (BIML) is located in Clarke County, Virginia, an area served by Lord Fairfax Community College.
Welcome to the BIML blog where we will (informally) write about MLsec, otherwise known as Machine Learning security. BIML is short for the Berryville Institute of Machine Learning. For what it’s worth, we think it is pretty amusing to have a “Berryville Institute” just like Santa Fe has the “Santa Fe Institute.” You go, Berryville!
BIML was born when I retired from my job of 24 years in January 2019. Many years ago, as a graduate student at Indiana University, I did lots of work in machine learning and AI as part of my Ph.D. program in Cognitive Science. As a student of Doug Hofstadter’s, I was deeply interested in emergent computation, sub-symbolic representation, error making, analogy, and low-level perceptual systems. I was fortunate to be surrounded by lots of fellow students interested in building things and finding out how the systems we were learning about by reading papers actually worked. Long story short, we built lots of real systems and published a bunch of papers in the scientific literature about what we learned.
Our mission at BIML is to explore the security implications built into ML systems. We’re starting with neural networks, which are all the rage at the moment, but we intend to think and write about genetic algorithms, sparse distributed memory, and other ML systems. Just to make this perfectly clear: we’re not really thinking much about using ML for security; rather, we are focused on the security of ML systems themselves.
Fast forward 24 years. As one of the fathers of software security and security engineering at the design level, I have been professionally interested in how systems break, what kinds of risks are inherent in system design, and how to design systems that are more secure. At BIML we are applying these techniques and approaches directly to ML systems.
Through a series of small-world phenomena, the BIML group coalesced, sparked first when I met Harold Figueroa at an Ntrepid Technical Advisory Board meeting in the Fall of 2018 (I am the Chair of Ntrepid’s TAB, and Harold leads Ntrepid’s machine learning research group). Harold and I had a great initial discussion over dinner about representation, ML progress, and other issues. We decided that continuing those discussions and digging into some research was in order. Victor Shepardson, who did lots of ML work at Dartmouth as a master’s student, was present for our first meeting in January. We quickly added Richie Bonett, a Berryville local like me (!!) and a Ph.D. student at William and Mary, to the group. And BIML was officially born.
We started with a systematic and in-depth review of the MLsec literature. You can see the results of our work in the annotated bibliography, which we will continue to curate as we read and discuss papers.
Our second task was to develop an Attack Taxonomy that makes meta-level sense of the burgeoning ML attack literature. These days, lots of energy is being expended attacking particular ML systems. Some of the attacks are quite famous (fooling stop sign recognition and seeing cats as tanks both come to mind), and the popular press has made much of both ML progress and amusing attacks against ML systems. You can review the (ongoing) Attack Taxonomy work elsewhere on our website.
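For readers who haven’t seen one of these attacks up close, here is a minimal sketch (not taken from our taxonomy) of the classic adversarial-example recipe, the Fast Gradient Sign Method, written in Python with PyTorch; the model, inputs, labels, and epsilon value are illustrative assumptions, not a definitive implementation.

# Minimal FGSM sketch: nudge an input in the direction that increases the
# model's loss so that a correctly classified example becomes misclassified.
# The model, inputs, labels, and epsilon below are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed by one signed-gradient step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # loss with respect to the true labels
    loss.backward()                         # gradient of the loss with respect to the input
    x_adv = x + epsilon * x.grad.sign()     # small step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range

# Usage, assuming a trained image classifier `model`, inputs `x` scaled to [0, 1],
# and integer labels `y`:
#   x_adv = fgsm_perturb(model, x, y)
#   print((model(x_adv).argmax(dim=1) != y).float().mean())  # fraction of labels flipped

The point of the sketch is simply that a tiny, carefully chosen perturbation (invisible to a human) can flip a model’s answer, which is exactly the kind of attack the taxonomy organizes.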
We’re now in the midst of an Architectural Risk Analysis (ARA) of a generic ML system. Our approach follows the ARA process introduced in my book Software Security and applied professionally for many years at Cigital. We plan to publish our work here as we make progress.
We’re really having fun with this group, and we hope you will get as much of a kick out of our results as we’re getting. We welcome contact and collaboration. Please let us know what you think.