BIML Presents at NBIM 10.18.23

NBIM is the world’s largest sovereign wealth fund

BIML was invited to Oslo to present its views on Machine Learning Security in two presentations at NBIM in October.

The first was delivered to 250+ technologists on staff (plus 25 or so invited guests from all around Norway). During the talk, BIML revealed its “Top Ten LLM Risks” data for the first time (pre-publication).

BIML presented two talks at NBIM

The second session was a fireside chat for 19 senior executives.

BIML on the AP Wire: why red teaming is feeble

The idea that machine learning security is exclusively about “hackers,” “attacks,” or some other kind of “adversary” is misguided. This is the same sort of philosophy that misled software security into a myopic overfocus on penetration testing way back in the mid ’90s. Not that pen testing and red teaming are useless, mind you, but there is way more to security engineering than penetrate and patch. It took us forever (well, a decade or more) to get past the pen test puppy love and start building real tools to find actual security bugs in code.

That’s why the focus on red teaming AI coming out of the White House this summer was so distressing. On the one hand, OK, the White House said AI and security in the same sentence; but on the other hand, hackers gonna hack us outta this problem? Not so much.

This red teaming nonsense is worse than just a philosophy problem; it’s a technical issue too. Just take a look at this ridiculous piece of work from Anthropic.

Red Teaming Language Models to Reduce Harms:
Methods, Scaling Behaviors, and Lessons Learned

Red teaming sounds high tech, mysterious, and steeped in hacker mystique, but today’s ML systems won’t benefit much from post facto pen testing. We must build security into AI systems from the very beginning (by paying way more attention to the enormous swaths of data used to train them and the risks these data carry). We can’t security test our way out of this corner, especially when it comes to the current generation of LLMs.

It’s tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side. Unfortunately, the world knows all too well what happens when we pretend to be hard at work on security while what we’re actually doing is more akin to squeezing our eyes shut and claiming to be invisible. Just ask yourself one simple question: who benefits from a security circus in this case?

AP reporter Frank Bajak covered BIML’s angle in this worldwide story on August 13, 2023.

https://apnews.com/article/ai-cybersecurity-malware-microsoft-google-openai-redteaming-1f4c8d874195c9ffcc2cdffa71e4f44b

New BIML Member

We are extremely pleased to announce that Katie McMahon has joined BIML as a permanent researcher.

Katie McMahon

Katie McMahon is a global entrepreneur and technology executive who has been at the leading edge of sound recognition and natural language understanding technologies for the past 20 years. As VP at Shazam, she brought the iconic music recognition app to market; it went on to reach 2 billion installs and 70 billion queries before being acquired by Apple. She then spent over a decade at SoundHound (NASDAQ:SOUN) bringing NLU technology and Voice AI products from lab to market. She has worked for Snap and most recently served as President & COO of Native Voice. She advises several early-stage tech companies, including Neosensory, Valence Vibrations, NatureQuant, and Wia.io. McMahon is the lead inventor on several patents covering methods of automatic speech recognition, natural language understanding, and augmented reality. She earned a BA in Political & Social Thought from the University of Virginia; has completed coursework at Stanford, MIT Sloan, and the London School of Economics and Political Science; and most recently earned the Corporate Board Readiness badge from the Leavey School of Business at Santa Clara University in Silicon Valley. Katie is most interested in understanding how rapidly evolving AI and the wider tech landscape stand to impact business, society, and humanity at large.

BIML Participates in CalypsoAI’s AccelerateAI2023

As the world rapidly advances technologically, it is vital to understand the implications and opportunities presented by Large Language Models (LLMs) in the realm of national security and beyond. This discussion will bring together leading experts from various disciplines to share insights on the risks, ethical considerations, and potential benefits of using LLMs for intelligence, cybersecurity, and other applications.

https://accelerate.calypsoai.com/2023#Speakers

Panel on ML and Architectural Risk Analysis (aka Threat Modeling)

IriusRisk, a company specializing in automating threat modeling for software security, hosted a webinar on Machine Learning and Threat Modeling on March 30, 2023. BIML CEO Gary McGraw participated in the webinar along with Adam Shostack.

The webinar was recorded, and you can watch it here. FWIW, we are still not exactly clear on Adam’s date of replacement.

BIML Keynotes National Science Foundation Meeting

Every few years, the National Science Foundation holds vision workshops to discuss scientific progress in the fields it supports. This year BIML’s Gary McGraw was pleased to keynote the Computer Science “Secure and Trustworthy Cyberspace” meeting.

He gave a talk on what #MLsec can learn from #swsec, with a focus on technology discovery, development, and commercialization. There are many parallels between the two fields. Now is a great time to be working in machine learning security.

You can download the slides here.

AMA for Open Security Summit Zeroes in on Machine Learning

BIML CEO Gary McGraw participates in an AMA

Lots of excellent content on ML Security, ML, and security in this video. Have a look.