ML and Automated Coding: Not Ready for Prime Time

As a software security guy, I am definitely in tune with the idea of automated coding. But today’s “code assistants” have no design-level understanding of code. Plus, they copy (statistically speaking, anyway) chunks of code full of bugs.

Robert Lemos wrote a very timely article on the matter. Check it out.

https://readme.security/ai-code-assistants-need-security-training-fb1b81acc85a

BIML in darkreading 2: a focus on training data

The second article in a two-part darkreading series on machine learning data exposure and data-related risk focuses on protecting training data without screwing it up. For the record, we believe that technical approaches like synthetic data creation and differential privacy definitely screw up your data, sometimes so much that the ML activity you wanted to accomplish is no longer feasible.
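
To make that concrete, here is a toy sketch of our own (not anything the darkreading article prescribes) that perturbs training features with Laplace noise in the spirit of local differential privacy and watches test accuracy fall off as the privacy budget epsilon shrinks. The dataset, model, and epsilon values are arbitrary choices for illustration only.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def perturb(data, epsilon):
    # Add per-feature Laplace noise scaled by the feature's range
    # (a crude sensitivity proxy) divided by the privacy budget epsilon.
    sensitivity = data.max(axis=0) - data.min(axis=0)
    return data + rng.laplace(0.0, sensitivity / epsilon, size=data.shape)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for eps in (None, 10.0, 1.0, 0.1):
    # Train on (possibly privatized) data, evaluate on clean held-out data.
    X_train = X_tr if eps is None else perturb(X_tr, eps)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = model.fit(X_train, y_tr).score(X_te, y_te)
    label = "no noise" if eps is None else f"epsilon={eps}"
    print(f"{label:>12}: test accuracy {acc:.3f}")

The smaller epsilon gets (that is, the stronger the privacy guarantee), the noisier the training data becomes and the worse the resulting model performs. That is the utility-versus-privacy tradeoff we are complaining about.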

Read the article now.

The first article in the series can be found here. That article introduces the often-ignored problem of operational query data exposure.

Discussing MLsec at the Local Retirement Community

As part of our mission to spread the word about machine learning security far and wide, we were pleased to deliver a talk at Westminster-Canterbury in the Shenandoah Valley.

The talk posed a bit of a challenge since it was the very first “Thursday talk” delivered after COVID swept the planet. As you might imagine, seniors who are smart are very wary of the pandemic. In the end, the live talk was delivered to around 12 people, with an audience of about 90 on closed-circuit TV. That, and the fact that these accomplished seniors came from all walks of life, made this an interesting iteration of the BIML talk.

Watch for yourself!

Let us know if your group would appreciate a talk from BIML.

Think Global, Talk Local

We’re pleased that BIML has helped spread the word about MLsec (that is, machine learning security engineering) all over the world. We’ve given talks in Germany, Norway, England, and, of course, all over the United States.

And we’re always up for more. If you are interested in having BIML participate in your conference, please contact Gary McGraw through his website.

This summer, we were asked to give a talk at our local community center, the Barns of Rose Hill. We were happy to oblige. We even donated all proceeds to FISH.

Those of you who are deep geeks will know what a challenge it can be to communicate subtle technology ideas to normal people. BIML prides itself on being able to do that. The issues we’re researching are important and have a direct impact on our society as a whole. For that reason, spreading the word among non-technical normals is critical to our mission.

BIML in the Barn Episode 4: David Evans, University of Virginia

An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.

Our fourth video talk features Professor David Evans, a computer scientist at the University of Virginia working on Security Engineering for Machine Learning. David is interested in the same notions of representation and generalization that we’re interested in at BIML.

Watch Dave’s video here.

Sadly, BIML stopped producing BIML in the Barn episodes after our super talented videographer moved to Berlin. We may restart if we come up with a pile of money to produce more videos. Sponsors welcome!

Dr. McGraw Delivers Labcorp “Leadership in Technology” Talk at NCSU

This version of the Security Engineering for Machine Learning talk is focused on computer scientists familiar with algorithms and basic machine learning concepts. It was delivered 2/24/22.

You can watch the video on YouTube here: https://youtu.be/Goe0Sbn5Ma8

BIML in darkreading: ops data exposure versus training data exposure

In an article published in February 2022, BIML CEO Gary McGraw discusses why ML practitioners need to consider ops data exposure in addition to worrying about training data. Have a read.

This is the first in a two-part series focused on data privacy and ML; it covers ops data exposure. The second article discusses training data in more detail.

BIML talk in Berryville July 1st

BIML co-founder and CEO Gary McGraw will deliver a public lecture at the Barns of Rose Hill on Friday July 1st. All proceeds benefit FISH of Clarke County.

Tickets for the Barns of Rose Hill talk are available now. Get yourself some here!

BIML in the Barn, Episode 3: Ram Shankar Siva Kumar, Microsoft

An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.

Our third video talk features Ram Shankar Siva Kumar, a researcher at Microsoft Azure working on Adversarial Machine Learning. Of course, we prefer to call this Security Engineering for Machine Learning. Lots of good stuff in this talk about regulation, compliance, security, and privacy.

Ram ponders, “Why is your toaster more trustworthy than your self-driving car?”

Here’s Ram!

Training the Data Elephant in the AI Room

It turns out that operational data exposure swamps all other kinds of data exposure and data security issues in ML, which came as a surprise.

Check out this darkreading article detailing this line of thinking.