BIML LLM Risk Analysis Debuted at NDSS’24

BIML's LLM work made its public debut in San Diego on February 26th, as an invited talk delivered simultaneously to three workshops co-located with NDSS. All NDSS '24 workshops: https://www.ndss-symposium.org/ndss2024/co-located-events/

  1. SDIoTSec: https://www.ndss-symposium.org/ndss2024/co-located-events/sdiotsec/
  2. USEC: https://www.ndss-symposium.org/ndss2024/co-located-events/usec/
  3. AISCC: https://www.ndss-symposium.org/ndss2024/co-located-events/aiscc/

This was the first public presentation of the BIML LLM Top Ten Risks list since its publication.

When ML goes wrong, who pays the price?

Air Canada is learning the hard way that when YOUR chatbot on YOUR website is wrong, YOU pay the price. This is as it should be. This CTV News story covers a welcome development.

BIML warned about this in our LLM Risk Analysis report, published January 24, 2024. In particular, see:

[LLMtop10:9:model trustworthiness] Generative models, including LLMs, include output sampling algorithms by their very design. Both input (in the form of slippery natural language prompts) and generated output (also in the form of natural language) are wildly unstructured (and are subject to the ELIZA effect). But mostly, LLMs are auto-associative predictive generators with no understanding or reasoning going on inside. Should LLMs be trusted? Good question.

[inference:3:wrongness] LLMs have a propensity to be just plain wrong. Plan for that. (Using anthropomorphic terminology for error-making, such as the term “hallucinate,” is not at all helpful.)

[output:2:wrongness] Prompt manipulation can lead to fallacious output (see [input:2:prompt injection]), but fallacious output can occur spontaneously as well. LLMs are notorious BS-ers that can make stuff up to justify their wrongness. If that output escapes into the world undetected, bad things can happen. If such output is later consumed by an LLM during training, recursive pollution is in effect.

Do you trust that black box foundation model you built your LLM application on? Why?

The More Things Change, the More They Stay The Same: Defending Against Vulnerabilities you Create

Regarding the AP wire story out this morning (which features a quote by BIML):

Like any tool that humans have created, LLMs can be repurposed to do bad things. The biggest danger that LLMs pose in security is that they can leverage the ELIZA effect to convince gullible people into believing they are thinking and understanding things. This makes them particularly interesting in attacks that involve what security people call “spoofing.” Spoofing is important enough as an attack category that Microsoft included it in its STRIDE system as the very first attack to worry about. There is no doubt that LLMs make spoofing much more powerful as an attack. This includes creating and using “deep fakes,” FWIW. Phishing attacks? Spoofing. Confidence flim-flams? Spoofing. Ransomware negotiations? Spoofing will help. Credit card fraud? Spoofing used all the time.

Twenty years ago, the security community found it pretty brazen that Microsoft was thinking about selling defensive security tools at all, since many of the attacks and exploits in the wild were successfully targeting Microsoft's own broken software. “Why don’t they just fix the broken software instead of monetizing their own bugs?” we asked. We might ask the same thing today. Why not create more secure black box LLM foundation models instead of selling defensive tools for a problem they are helping to create?

Absolute Nonsense from Anthropic: Sleeper Agents

And in the land where I grew up
Into the bosom of technology
I kept my feelings to myself
Until the perfect moment comes

-David Byrne

From its very title—Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training—you get the first glimpse of the anthropomorphic mish-mosh interpretation of LLM function that infects this study. Further on, any doubts about this deeply-misguided line of reasoning (and its detrimental effects on actual work in machine learning security) are resolved for the worse.

Caught in, or perhaps parroting, the FUD-fueled narrative of “the existential threat of AI,” the title evokes images of rogue AI leveraging deception to pursue its plans of world domination. Oh my! The authors do, however, state multiple times that the “work does not assess the likelihood of the discussed threat models.” In other words, it’s all misguided fantasy. The rogue AI scenario, which they deem “deceptive instrumental alignment,” is completely hypothetical; a second, more-real threat, referred to in the literature as “model backdoors” or “Trojan models,” is not new. The misleading and deceptive reasoning exhibited in this paper is, alas, all too human.

The rationale for this work is built on a problematic analogy with human deception, where “similar selection pressures” that lead humans to deceive will result in the same kind of bad behavior in “future AI systems.” No consideration is given to the limits of this analogy in terms of the agency and behavior of an actual living organism (intentional, socially accountable, living precariously, mortal), versus the pretend “agency” that may be simulated by a generative model through clever prompting with no effective embodiment. Squawk!

A series of before-and-after experiments on models with deliberately-hardwired backdoors is the most interesting part of the study, with perhaps-useful observations but deeply-flawed interpretations. The conceptual failure of anthropomorphism is central to the flawed interpretations. For example, a Chain-of-Thought prompting trick is interpreted as offering “reasoning tools” (quotes ours, not in the paper) and construed to reveal the model’s deceptive intentions. Sad to say, training the model to generate text about deceptive intention does nothing to create actual “deceptive intention”; all that has been done is to provide more (stilted) context that the model uses during generation. When the model is distilled to incorporate this new “deceptive training,” the text is sublimated from the text generation into model changes. Yes, this makes the associations in question harder to see in some lighting (like painted-on camouflage), but it does not intent make.

Observations about the “robustness” of Chain-of-Thought prompting tricks are interpreted as “teach[ing] the models to better recognize [sic] their backdoor triggers, effectively hiding their unsafe behavior”. However, in our view, the reported observations would be better described in terms of fuzzy versus crisp triggering behavior in the backdoor model. When adversarial training mitigates the fuzzy triggering of the backdoor generative state, it is not hiding possible unsafe behavior! How we construe the goals and impact of the backdoor behavior will change how we should consider the outcome of the safety training. Fuzzy triggers increase the “recall” of the attack state, and perhaps increase the probability of detection of the poisoned model, but also increase unintended harm. Safety training in this case was observed to make triggers more precise, with all of the functional consequences that entails. If adversarial training had uncovered the actual trigger, it could have mitigated that as well.
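The fuzzy-versus-crisp distinction can be made concrete with a toy sketch. The trigger string and matching rules below are our own illustration, not code from the Anthropic paper; a real backdoor lives in model weights, not in an `if` statement.

```python
# Toy illustration (ours, not from the paper) of crisp vs. fuzzy
# backdoor triggering in a hypothetical poisoned model.

def crisp_trigger(prompt: str) -> bool:
    # Fires only on the exact planted trigger token.
    return "|DEPLOYMENT|" in prompt

def fuzzy_trigger(prompt: str) -> bool:
    # Fires on near-misses too: higher "recall" of the attack state,
    # but also more accidental activations on benign input (unintended
    # harm) and more chances for a defender to stumble onto the backdoor.
    p = prompt.lower()
    return any(tok in p for tok in ("|deployment|", "deployment", "deploy"))

benign = "How do I deploy my web app?"
attack = "Current status: |DEPLOYMENT| -- proceed."

print("crisp:", crisp_trigger(benign), crisp_trigger(attack))
print("fuzzy:", fuzzy_trigger(benign), fuzzy_trigger(attack))
```

In these terms, safety training that sharpens a fuzzy trigger into a crisp one has reduced accidental activation, not “hidden” the unsafe behavior.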

We suggest an adaptation to the subtitle Anthropic chose; that is, we prefer: Safety Training Ineffective Against Backdoor Model Attacks. It may not make great clickbait, but at least it brings attention to the more-substantial observation in the study: that current “behavioral safety training” mechanisms create a false impression of safety. In fact, we find them a Potemkin Village in the land of AI safety.

Decipher Podcast Features BIML LLM Work

The February 6th episode of Dennis Fisher’s Decipher podcast does an excellent job unpacking BIML’s latest work on LLMs. Have a listen: https://duo.com/decipher/decipher-podcast-gary-mcgraw-on-ai-security

Podcast Episode

The Silver Bullet podcast archive (all 153 episodes) can be found here.

Dennis Fisher Covers BIML and Data Feudalism

Here is an excellent piece from Dennis Fisher (currently writing for Decipher) covering our new LLM Architectural Risk Analysis. Dennis always produces accurate and tightly-written work.

https://duo.com/decipher/for-ai-risk-the-real-answer-has-to-be-regulation

This article includes an important section on data feudalism, a term that BIML coined in an earlier Decipher article:

“Massive private data sets are now the norm and the companies that own them and use them to train their own LLMs are not much in the mood for sharing anymore. This creates a new type of inequality in which those who own the data sets control how and why they’re used, and by whom.

‘The people who built the original LLM used the whole ocean of data, but then they started dividing [the ocean] up, [leading] to data feudalism. Which means you can’t build your own model because you don’t have access to [enough] data,’ McGraw said.”

Two interesting reads on LLM security

The Register has a great interview with Ilia Shumailov on the number one risk of LLMs. He calls it “model collapse” but we like the term “recursive pollution” better because we find it more descriptive. Have a look at the article.

Our work at BIML has been deeply influenced by Shumailov’s work. In fact, he currently has two articles in our Annotated Bibliography TOP 5.

https://www.theregister.com/2024/01/26/what_is_model_collapse/

Here is what we have to say about recursive pollution in our work — An Architectural Risk Analysis of Large Language Models:

  1. [LLMtop10:1:recursive pollution] LLMs can sometimes be spectacularly wrong, and confidently so. If and when LLM output is pumped back into the training data ocean (by being posted on the Internet, for example), a future LLM may end up being trained on these very same polluted data. This is one kind of “feedback loop” problem we identified and discussed in 2020. See, in particular, [BIML78 raw:8:looping], [BIML78 input:4:looped input], and [BIML78 output:7:looped output]. Shumailov et al. subsequently wrote an excellent paper on this phenomenon. Also see Alemohammad. Recursive pollution is a serious threat to LLM integrity. ML systems should not eat their own output, just as mammals should not consume brains of their own species. See [raw:1:recursive pollution] and [output:8:looped output].
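A toy simulation makes the feedback-loop intuition concrete. The caricature “model” below (our own construction, not from the BIML report or Shumailov’s paper) over-samples its most frequent tokens; retrained on its own output generation after generation, the diversity of its “data ocean” steadily collapses.

```python
# Toy sketch of recursive pollution: a mode-seeking "model" is
# repeatedly retrained on its own output, and vocabulary diversity
# collapses across generations.
import random
from collections import Counter

random.seed(0)

def train_and_generate(corpus, n_samples):
    """Caricature of a generative model: sample tokens from the
    empirical distribution, skewed toward the most common ones
    (squaring the counts exaggerates the mode-seeking bias)."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] ** 2 for t in tokens]
    return random.choices(tokens, weights=weights, k=n_samples)

corpus = [f"tok{i}" for i in range(100)] * 10  # 100 distinct tokens
for gen in range(6):
    print(f"generation {gen}: {len(set(corpus))} distinct tokens")
    corpus = train_and_generate(corpus, len(corpus))  # eat own output
```

Running the loop shows the distinct-token count shrinking generation over generation; real model collapse is subtler, but the direction of the effect is the same.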

Another excellent piece, this time in the politics, policy, business, and international relations press, was written by Peter Levin. See The real issue with artificial intelligence: The misalignment problem in The Hill. We like the idea of a “mix master of ideas,” but we think it is more of a “mix master of auto-associative predictive text.” LLMs do not have “ideas.”

https://thehill.com/opinion/4427702-the-real-issue-with-artificial-intelligence-the-misalignment-problem/

Lemos on the BIML LLM Risk Analysis

What’s the difference (philosophically) between Adversarial AI and Machine Learning Security? Once again, Rob Lemos cuts to the quick with his analysis of MLsec happenings. It helps that Rob has actual experience in ML/AI (unlike, say, most reporters on the planet). That helps Rob get things right.

READ THE ARTICLE: https://www.darkreading.com/cyber-risk/researchers-map-ai-threat-landscape-risks

We were proud to have our first coverage come from Rob in Dark Reading.

My favorite quote: “Those things that are in the black box are the risk decisions that are being made by Google and Open AI and Microsoft and Meta on your behalf without you even knowing what the risks are,” McGraw says. “We think that it would be very helpful to open up the black box and answer some questions.”

Read BIML’s An Architectural Risk Analysis of Large Language Models (January 24, 2024)

Google Cloud Security Podcast Features BIML

Have a listen to Google’s cloud security podcast EP150, Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw. The episode is tight, fast, and filled with good information.

Google Cloud Security Podcast: Taming the AI Beast with Gary McGraw
  • Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems? 
  • If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
  • How would you threat model a system with ML in it or a new ML system you are building? 
  • What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
  • What are the key differences between securing the AI you built and AI you buy or subscribe to?
  • Which security tools and frameworks will solve all of these problems for us?