New BIML Member

We are extremely pleased to announce that Katie McMahon has joined BIML as a permanent researcher.

Katie McMahon

Katie McMahon is a global entrepreneur and technology executive who has been at the leading edge of sound recognition and natural language understanding technologies for the past 20 years. As VP at Shazam, she brought the iconic music recognition app to market; it went on to reach 2 billion installs and 70 billion queries before being acquired by Apple. She then spent over a decade at SoundHound (NASDAQ: SOUN) bringing NLU technology and Voice AI products from lab to market. She has worked for Snap and most recently served as President & COO of Native Voice. She advises several early-stage tech companies, including Neosensory, Valence Vibrations, NatureQuant, and Wia.io. McMahon is the lead inventor on several patents involving methods of Automatic Speech Recognition, Natural Language Understanding, and Augmented Reality. She earned a BA in Political & Social Thought from the University of Virginia; has completed coursework at Stanford, MIT Sloan, and the London School of Economics and Political Science; and most recently earned the Corporate Board Readiness badge certificate from the Leavey School of Business at Santa Clara University in Silicon Valley. Katie is most interested in understanding how rapidly evolving AI and the wider tech landscape stand to impact business, society, and humanity at large.

BIML Participates in Calypso AI’s AccelerateAI2023

As the world rapidly advances technologically, it is vital to understand the implications and opportunities presented by Large Language Models (LLMs) in the realm of national security and beyond. This discussion will bring together leading experts from various disciplines to share insights on the risks, ethical considerations, and potential benefits of using LLMs for intelligence, cybersecurity, and other applications.

https://accelerate.calypsoai.com/2023#Speakers

Panel on ML and Architectural Risk Analysis (aka Threat Modeling)

IriusRisk, a company specializing in automating threat modeling for software security, hosted a webinar on machine learning and threat modeling on March 30, 2023. BIML CEO Gary McGraw participated in the webinar along with Adam Shostack.

The webinar was recorded, and you can watch it here. FWIW, we are still not exactly clear on Adam’s date of replacement.

BIML Keynotes National Science Foundation Meeting

Every few years, the National Science Foundation holds vision workshops to discuss scientific progress in fields it supports. This year, BIML’s Gary McGraw was pleased to keynote the Computer Science “Secure and Trustworthy Cyberspace” meeting.

He gave a talk on what #MLsec can learn from #swsec, with a focus on technology discovery, development, and commercialization. There are many parallels between the two fields. Now is a great time to be working in machine learning security.

You can download the slides here.

AMA for Open Security Summit Zeros in on Machine Learning

BIML CEO Gary McGraw participates in an AMA

Lots of excellent content on ML Security, ML, and security in this video. Have a look.

Shostack on ML and Threat Modeling

Adam Shostack is one of the pre-eminent experts on threat modeling. So when he publishes an article, it is always worth reading and thinking about. But Adam seems to be either naïve or insanely optimistic when it comes to AI/ML progress. ML has no actual IDEA what it’s doing. Don’t ever forget that.

This issue is so important that we plan to debate it soon in a webinar format. Contact us for details.

Read Adam’s article here.

ML and Automated Coding: Not Ready for Prime Time

As a software security guy, I am definitely in tune with the idea of automated coding. But today’s “code assistants” have no design-level understanding of code. Plus, they copy (statistically speaking, anyway) chunks of code full of bugs.

Robert Lemos wrote a very timely article on the matter. Check it out.

https://readme.security/ai-code-assistants-need-security-training-fb1b81acc85a

BIML in darkreading 2: a focus on training data

The second article in a two-part darkreading series on machine learning data exposure and data-related risk focuses on protecting training data without screwing it up. For the record, we believe that technical approaches like synthetic data creation and differential privacy definitely screw up your data, sometimes so much that the ML activity you wanted to accomplish is no longer feasible.
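To make the utility tradeoff concrete, here is a minimal, hypothetical sketch (not from the article) of the classic Laplace mechanism for a differentially private mean: the stronger the privacy guarantee (smaller epsilon), the more noise gets added, and the less useful the answer becomes. All names and numbers below are illustrative.

```python
import math
import random

def dp_mean(values, epsilon, lower, upper):
    """Differentially private mean of `values` via the Laplace mechanism.

    Each value is clamped to [lower, upper] so the sensitivity of the
    mean query is (upper - lower) / n; Laplace noise is then scaled by
    sensitivity / epsilon. Smaller epsilon = stronger privacy = more noise.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / n
    scale = (upper - lower) / (n * epsilon)
    # Sample Laplace(0, scale) noise via the inverse CDF.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

random.seed(0)
salaries = [52_000, 61_000, 58_000, 75_000, 49_000, 66_000]  # toy data
exact = sum(salaries) / len(salaries)

# Tightening privacy (smaller epsilon) visibly degrades the estimate.
for eps in (0.01, 0.1, 1.0):
    noisy = dp_mean(salaries, eps, lower=30_000, upper=100_000)
    print(f"epsilon={eps}: exact={exact:.0f}, private={noisy:.0f}")
```

With epsilon around 0.01 the noise scale rivals the quantity being measured, which is exactly the kind of distortion that can make the downstream ML task infeasible.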

Read the article now.

The first article in the series can be found here. That article introduces the often-ignored problem of operational query data exposure.