Indiana University SPICE Talk

BIML’s work was featured in an April 5th talk at the Luddy Center for Artificial Intelligence, part of Indiana University.

Here is the talk abstract. If you or your organization are interested in hosting this talk, please let us know.

10, 23, 81 — Stacking up the LLM Risks: Applied Machine Learning Security

I present the results of an architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks previously identified by BIML in 2020. After a brief level-set, I cover the top 10 LLM risks, then detail 23 black box LLM foundation model risks screaming out for regulation, and finally provide a bird’s eye view of all 81 LLM risks BIML identified. BIML’s first work, published in January 2020, presented an in-depth ARA of a generic machine learning process model, identifying 78 risks. In this talk, I consider a more specific type of machine learning use case—large language models—and report the results of a detailed ARA of LLMs. This ARA serves two purposes: 1) it shows how our original BIML-78 can be adapted to a more particular ML use case, and 2) it provides a detailed accounting of LLM risks. At BIML, we are interested in “building security in” to ML systems from a security engineering perspective. Securing a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This ARA is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.

Tech Target Podcast: BIML Discusses 23 Black Box LLM Foundation Model Risks

A recently released podcast features an in-depth discussion of BIML’s recent LLM Risk Analysis, defining terms in an easy-to-understand fashion. We cover what exactly a RISK is, whether open source LLMs make any sense, how big BIG DATA really is, and more.

Have a listen here: https://targetingai.podbean.com/e/security-bias-risks-are-inherent-in-genai-black-box-models/

BIML LLM Risk Analysis Debuted at NDSS’24

BIML’s LLM work received its first public presentation on February 26th in San Diego, as an invited talk delivered to three conference workshops simultaneously. The workshops coincided with NDSS. All NDSS ’24 workshops: https://www.ndss-symposium.org/ndss2024/co-located-events/

  1. SDIoTSec: https://www.ndss-symposium.org/ndss2024/co-located-events/sdiotsec/
  2. USEC: https://www.ndss-symposium.org/ndss2024/co-located-events/usec/
  3. AISCC: https://www.ndss-symposium.org/ndss2024/co-located-events/aiscc/

This was the first public presentation of the BIML LLM Top Ten Risks list since its publication.

When ML goes wrong, who pays the price?

Air Canada is learning the hard way that when YOUR chatbot on YOUR website is wrong, YOU pay the price. This is as it should be. This story from CTV News is a great development.

BIML warned about this in our LLM Risk Analysis report published January 24, 2024. In particular, see:

[LLMtop10:9:model trustworthiness] Generative models, including LLMs, include output sampling algorithms by their very design. Both input (in the form of slippery natural language prompts) and generated output (also in the form of natural language) are wildly unstructured (and are subject to the ELIZA effect). But mostly, LLMs are auto-associative predictive generators with no understanding or reasoning going on inside. Should LLMs be trusted? Good question.
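
As a concrete illustration of what “output sampling” means here, below is a minimal Python sketch of temperature-based sampling over a toy vocabulary. The vocabulary, logits, and temperature are invented for illustration (real LLMs sample over tens of thousands of tokens), but the point carries over: the next token is drawn from a probability distribution, not derived by reasoning, which is why the trustworthiness question above matters.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Sample one token index from model logits using temperature sampling."""
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                         # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax over scaled logits
        return rng.choice(len(probs), p=probs)

    # Toy vocabulary and made-up logits: two runs on the same prompt
    # can yield different answers, because the answer is sampled.
    vocab = ["yes", "no", "maybe", "refund"]
    logits = [2.0, 1.5, 0.5, 1.8]
    print(vocab[sample_next_token(logits, temperature=0.8)])
    print(vocab[sample_next_token(logits, temperature=0.8)])

Raising the temperature flattens the distribution (more surprising output); lowering it sharpens the distribution, but the output remains stochastic.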

[inference:3:wrongness] LLMs have a propensity to be just plain wrong. Plan for that. (Using anthropomorphic terminology for error-making, such as the term “hallucinate,” is not at all helpful.)

[output:2:wrongness] Prompt manipulation can lead to fallacious output (see [input:2:prompt injection]), but fallacious output can occur spontaneously as well. LLMs are notorious BS-ers that can make stuff up to justify their wrongness. If that output escapes into the world undetected, bad things can happen. If such output is later consumed by an LLM during training, recursive pollution is in effect.
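
To make “plan for that” concrete, here is a minimal, hypothetical Python sketch of the kind of guard an application might put between an LLM and the outside world. The llm_answer and verify_against_policy callables are stand-ins invented for this example: the first wraps whatever model is in use, the second checks the draft against an authoritative source (policy documents, a database, a human reviewer) before anything reaches a customer. It is not a fix for fallacious output, just one way to keep such output from escaping undetected.

    from dataclasses import dataclass

    @dataclass
    class Review:
        answer: str
        approved: bool
        reason: str

    def guard_llm_output(prompt, llm_answer, verify_against_policy):
        """Review raw LLM text before it is allowed out into the world."""
        draft = llm_answer(prompt)
        ok, reason = verify_against_policy(draft)
        if not ok:
            # Fail closed: route to a human reviewer rather than publish.
            return Review(answer="", approved=False, reason=reason)
        return Review(answer=draft, approved=True, reason="verified")

    # Example wiring with trivial stand-ins:
    result = guard_llm_output(
        "What is the bereavement-fare refund window?",
        llm_answer=lambda p: "Refunds may be requested within 90 days.",
        verify_against_policy=lambda text: ("90 days" in text, "matches policy page"),
    )
    print(result.approved, result.answer)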

Do you trust that black box foundation model you built your LLM application on? Why?

The More Things Change, the More They Stay the Same: Defending Against Vulnerabilities You Create

Regarding the AP wire story out this morning (which features a quote by BIML):

Like any tool that humans have created, LLMs can be repurposed to do bad things. The biggest danger that LLMs pose in security is that they can leverage the ELIZA effect to convince gullible people that they are thinking and understanding things. This makes them particularly interesting in attacks that involve what security people call “spoofing.” Spoofing is important enough as an attack category that Microsoft included it in its STRIDE system as the very first attack to worry about. There is no doubt that LLMs make spoofing much more powerful as an attack. This includes creating and using “deep fakes” FWIW. Phishing attacks? Spoofing. Confidence flim-flams? Spoofing. Ransomware negotiations? Spoofing will help. Credit card fraud? Spoofing used all the time.

Twenty years ago, the security community found it pretty brazen that Microsoft was thinking about selling defensive security tools at all, since many of the attacks and exploits in the wild were successfully targeting their broken software. “Why don’t they just fix the broken software instead of monetizing their own bugs?” we asked. We might ask the same thing today. Why not create more secure black box LLM foundation models instead of selling defensive tools for a problem they are helping to create?

Decipher Podcast Features BIML LLM Work

The February 6th episode of Dennis Fisher’s Decipher podcast does an excellent job unpacking BIML’s latest work on LLMs. Have a listen: https://duo.com/decipher/decipher-podcast-gary-mcgraw-on-ai-security

Podcast Episode

The Silver Bullet podcast archive (all 153 episodes) can be found here.

Dennis Fisher Covers BIML and Data Feudalism

Here is an excellent piece from Dennis Fisher (currently writing for Decipher) covering our new LLM Architectural Risk Analysis. Dennis always produces accurate and tightly-written work.

https://duo.com/decipher/for-ai-risk-the-real-answer-has-to-be-regulation

This article includes an important section on data feudalism, a term that BIML coined in an earlier Decipher article:

“Massive private data sets are now the norm and the companies that own them and use them to train their own LLMs are not much in the mood for sharing anymore. This creates a new type of inequality in which those who own the data sets control how and why they’re used, and by whom.

‘The people who built the original LLM used the whole ocean of data, but then they started dividing [the ocean] up, [leading] to data feudalism. Which means you can’t build your own model because you don’t have access to [enough] data,’ McGraw said.”