This version of the Security Engineering for Machine Learning talk is focused on computer scientists familiar with algorithms and basic machine learning concepts. It was delivered 2/24/22.
In an article published in February 2022, BIML CEO Gary McGraw discusses why ML practitioners need to consider ops data exposure in addition to worrying about training data. Have a read.
This is the first of two articles on data privacy and ML. It focuses on operational (ops) data exposure; the second discusses training data in more detail.
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.
Our third video talk features Ram Shankar Siva Kumar, a researcher at Microsoft Azure working on Adversarial Machine Learning. Of course, we prefer to call this Security Engineering for Machine Learning. Lots of good stuff in this talk about regulation, compliance, security, and privacy.
Ram ponders, “Why is your toaster more trustworthy than your self-driving car?”
It turns out that operational data exposure swamps all other kinds of data exposure and data security issues in ML, something that came as a surprise.
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to introduce a bi-monthly video series we’re calling BIML in the Barn.
Our first video talk features Maritza Johnson, a professor at UC San Diego and an expert on human-centered security and privacy. As you’re about to see, Maritza combines real-world experience from industry, teaching, and research, making her message relevant to a wide audience.
The (extremely) local paper in the county where Berryville is situated (rural Virginia) is distributed by mail. It also has a website, but that is an afterthought at best.
Fortunately, the Clarke Monthly is on the cutting edge of technology reporting. Here is an article featuring BIML and Security Engineering for Machine Learning.
I gave a talk this week at a meeting hosted by Microsoft and Mitre called the 6th Security Data Science Colloquium. It was an interesting crowd (about 150 people), including the usual suspects: Microsoft, Google, Facebook, a bunch of startups and universities, and of course BIML.
I decided to rant about nomenclature, with a focus on RISKS versus ATTACKS as a central tenet of how to approach ML security. Heck, even the term “Adversarial AI” gets it wrong in all the ways. For the record, we call the field we are in “Machine Learning Security.”
In our view at BIML, every attack has one or more risks behind it, but not every risk in the BIML-78 has an associated attack. For us, it is obvious that we should work on controlling risks, NOT stopping attacks one at a time.
Another week, another talk in Indiana! This time Purdue’s CERIAS center was the target. Turns out I have given “one talk per decade” at Purdue, starting with a 2001 talk (then 2009). Here is the 2021 edition.
What will I be talking about in 2031??!
BIML Speaks at Indiana University in the CACR Series
BIML founder Gary McGraw delivered the last talk of the semester for the Center for Applied Cybersecurity Research (CACR) speakers series at Indiana University. You can watch the talk on YouTube.
If your organization is interested in having a presentation by BIML, please contact us today.