Tickets for the Barns of Rose Hill talk are available now. Get yourself some here!
BIML in the Barn, Episode 3: Ram Shankar Siva Kumar, Microsoft
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.
Our third video talk features Ram Shankar Siva Kumar, a researcher at Microsoft Azure working on Adversarial Machine Learning. Of course, we prefer to call this Security Engineering for Machine Learning. There is lots of good stuff in this talk about regulation, compliance, security, and privacy.
Ram ponders, “why is your toaster more trustworthy than your self-driving car?”
Training the Data Elephant in the AI Room
It turns out that operational data exposure swamps all other kinds of data exposure and data security issues in ML, something that came as a surprise to us.
Check out this Dark Reading article detailing this line of thinking.
Introducing BIML in the Barn Video Series
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to introduce a bi-monthly video series we’re calling BIML in the Barn.
Our first video talk features Maritza Johnson, a professor at UC San Diego and an expert on human-centered security and privacy. As you’re about to see, Maritza combines real-world experience from industry, teaching, and research, making her message relevant to a wide audience.
Berryville Meets Silicon Valley
The (extremely) local paper in the county where Berryville is situated (rural Virginia) is distributed by mail. They also have a website, but that is an afterthought at best.
Fortunately, the Clarke Monthly is on the cutting edge of technology reporting. Here is an article featuring BIML and Security Engineering for Machine Learning.
Have a read and pass it on!
Attacks, Risks, Security Engineering and ML
I gave a talk this week at a meeting hosted by Microsoft and Mitre called the 6th Security Data Science Colloquium. It was an interesting bunch (about 150 people) including the usual suspects: Microsoft, Google, Facebook, a bunch of startups and universities, and of course BIML.
I decided to rant about nomenclature, with a focus on RISKS versus ATTACKS as a central tenet of how to approach ML security. Heck, even the term “Adversarial AI” gets it wrong in all the ways. For the record, we call the field we are in “Machine Learning Security.”
Here is one of the slides in my deck. You can get the whole deck here.
In our view at BIML, every attack has one or more risks behind it, but not every risk in the BIML-78 has an associated attack. For us, it is obvious that we should work on controlling risks, NOT stopping attacks one at a time.
BIML at Purdue
Another week, another talk in Indiana! This time Purdue’s CERIAS center was the target. Turns out I have given “one talk per decade” at Purdue, starting with a 2001 talk (then 2009). Here is the 2021 edition.
What will I be talking about in 2031??!
BIML Speaks at Indiana University in the CACR Series
BIML founder Gary McGraw delivered the last talk of the semester for the Center for Applied Cybersecurity Research (CACR) speakers series at Indiana University. You can watch the talk on YouTube.
If your organization is interested in having a presentation by BIML, please contact us today.
BIML Featured in Dark Reading
Some nice coverage in the security press for our work at BIML. Thanks to Rob Lemos!
Maritza Johnson Joins BIML to Discuss Social Justice, Bias, and ML
As our MLsec work makes abundantly clear, data play a huge role in the security of an ML system. Our estimation is that somewhere around 60% of all security risk in ML can be directly associated with data. And data are biased in ways that lead to serious social justice problems including racism, sexism, classism, and xenophobia. We’ve read a few ML bias papers (see the BIML Annotated Bibliography for our commentary). It turns out that social justice in ML is a thorny and difficult subject.
We were joined this week by Maritza Johnson, a computer scientist and the inaugural director of a new center for data science, AI, and society at the University of San Diego. Maritza assigned us some homework (reading Chapter One and Chapter Four of Data Feminism, this blog entry, and watching Coded Bias), and then led us in a very interesting and far-ranging conversation on bias in ML.
We recorded our conversation with Maritza, which you can watch in the video below.