Dennis Fisher Covers BIML and Data Feudalism

Here is an excellent piece from Dennis Fisher (currently writing for Decipher) covering our new LLM Architectural Risk Analysis. Dennis always produces accurate and tightly-written work.

https://duo.com/decipher/for-ai-risk-the-real-answer-has-to-be-regulation

This article includes an important section on data feudalism, a term that BIML coined in an earlier Decipher article:

“Massive private data sets are now the norm and the companies that own them and use them to train their own LLMs are not much in the mood for sharing anymore. This creates a new type of inequality in which those who own the data sets control how and why they’re used, and by whom.

‘The people who built the original LLM used the whole ocean of data, but then they started dividing [the ocean] up, [leading] to data feudalism. Which means you can’t build your own model because you don’t have access to [enough] data,’ McGraw said.”

Two interesting reads on LLM security

The Register has a great interview with Ilia Shumailov on the number one risk of LLMs. He calls it “model collapse” but we like the term “recursive pollution” better because we find it more descriptive. Have a look at the article.

Our work at BIML has been deeply influenced by Shumailov’s work. In fact, he currently has two articles in our Annotated Bibliography TOP 5.

https://www.theregister.com/2024/01/26/what_is_model_collapse/

Here is what we have to say about recursive pollution in our work — An Architectural Risk Analysis of Large Language Models:

  1. [LLMtop10:1:recursive pollution] LLMs can sometimes be spectacularly wrong, and confidently so. If and when LLM output is pumped back into the training data ocean (by virtue of being put on the Internet, for example), a future LLM may end up being trained on these very same polluted data. This is one kind of “feedback loop” problem we identified and discussed in 2020. See, in particular, [BIML78 raw:8:looping], [BIML78 input:4:looped input], and [BIML78 output:7:looped output]. Shumailov et al. subsequently wrote an excellent paper on this phenomenon. Also see Alemohammad. Recursive pollution is a serious threat to LLM integrity. ML systems should not eat their own output, just as mammals should not consume the brains of their own species. See [raw:1:recursive pollution] and [output:8:looped output].

Another excellent piece, this time in the politics, policy, business, and international relations press, is written by Peter Levin. See “The real issue with artificial intelligence: The misalignment problem” in The Hill. We like the idea of a “mix master of ideas,” but we think it is more of a “mix master of auto-associative predictive text.” LLMs do not have “ideas.”

https://thehill.com/opinion/4427702-the-real-issue-with-artificial-intelligence-the-misalignment-problem/

Lemos on the BIML LLM Risk Analysis

What’s the difference (philosophically) between Adversarial AI and Machine Learning Security? Once again, Rob Lemos cuts to the quick with his analysis of MLsec happenings. It helps that Rob has actual experience in ML/AI (unlike, say, most reporters on the planet). That helps Rob get things right.

READ THE ARTICLE: https://www.darkreading.com/cyber-risk/researchers-map-ai-threat-landscape-risks

We were proud to have our first coverage come from Rob in darkreading.

My favorite quote: “Those things that are in the black box are the risk decisions that are being made by Google and Open AI and Microsoft and Meta on your behalf without you even knowing what the risks are,” McGraw says. “We think that it would be very helpful to open up the black box and answer some questions.”

Read BIML’s An Architectural Risk Analysis of Large Language Models (January 24, 2024)

Google Cloud Security Podcast Features BIML

Have a listen to Google’s cloud security podcast EP150, Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw. The episode is tight, fast, and filled with good information.

Google Cloud Security Podcast: Taming the AI Beast with Gary McGraw
  • Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems? 
  • If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
  • How would you threat model a system with ML in it or a new ML system you are building? 
  • What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
  • What are the key differences between securing the AI you built and AI you buy or subscribe to?
  • Which security tools and frameworks will solve all of these problems for us? 

All Your LLM Are Belong to Us

We didn’t want to rain on the Davos parade, so we waited until this week to release our latest piece of work. Our paper “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” spotlights what we view as major concerns with foundation model LLMs as well as their adaptations and applications.

We are fans of ML and “AI” (which the whole world tilted towards in 2023, fawning over the latest models with both awe and apprehension). We’re calling out the inherent risks. Not hand wavy stuff—we’ve spent the past year reading science publications, dissecting the research ideas, understanding the math, testing models, parsing through the noise, and ultimately analyzing LLMs through the lens of security design and architecture. We took the tool we invented for ML security risk analysis in 2020 (see our earlier paper, “Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning”) and applied it to LLMs specifically.

We found 81 risks overall, distilled a Top Ten (Risks) list, and shined a spotlight on 23 critical risks inherent in the black box LLM foundation models.

And now 2024 is off and running. It will be the year of “AI Governance” in name and (optimistic) intent. In practice, however, it’s on pace to be a shitshow for democracy as regulators run like hell just to get to the starting line.

The Slovak parliamentary election deepfake debacle is the tip of the iceberg. OpenAI tried to get ahead of concerns that its technology may be used to influence the US Presidential Election in nefarious ways by posting its plans to deter misinformation. The irony is that OpenAI trained its models on a corpus so large that it holds vast globs of crazy rhetoric, conspiracy theories, fake news, and other pollution, which its stochastic models will draw upon and (predictably) spit out…that will, in turn, add to the ever-amassing pile of garbage-strewn data in the world, which future LLM foundation models will ingest, … See the problem here? That’s recursive pollution.
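The feedback loop is easy to see in miniature. The following toy sketch (our own illustration, not from the BIML paper) “trains” a model by fitting a Gaussian to its data, then trains each successive generation only on the previous generation’s synthetic output, so the fitted distribution drifts away from the real data:

```python
import random
import statistics

def train(data):
    # "Training" a toy model = fitting a Gaussian (mean, stdev) to the data.
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n):
    # "Inference" = sampling from the fitted Gaussian.
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
real_data = [random.gauss(0.0, 1.0) for _ in range(500)]
model = train(real_data)

# Each generation trains only on the previous generation's output.
# Estimation error compounds, so the fitted parameters random-walk
# away from the true distribution instead of staying anchored to it.
for generation in range(10):
    synthetic = generate(model, 500)
    model = train(synthetic)

print(model)  # (mean, stdev) after ten generations of self-training
```

Real LLM training is vastly more complicated, of course, but the core mechanism (model output re-entering the training set and compounding error) is the same one Shumailov et al. document at scale.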

It’s the Data, stupid. (We sure wish it were that simple, anyway.)

See our official Press Release here.

Another Round of “Adversarial Machine Learning” from NIST

The National Institute of Standards and Technology (aka NIST) recently released a paper enumerating many attacks relevant to AI system developers. With the seemingly unending rise in costs incurred by cybercrime, it’s sensible to think through the means and motives behind these attacks. NIST provides good explanations of the history and context for a variety of AI attacks in a partially-organized laundry list. That’s a good thing. However, in our view, NIST’s taxonomy lacks a useful structure for thinking about and categorizing systemic AI risks. We released a simple (and hopefully more effective) taxonomy of ML attacks in 2019 that divides attacks into two types—extraction and manipulation—and further divides these types across the three most common attack surfaces found in all ML systems—the model, the (training) data, and the (runtime) inputs. That move yields a six-category taxonomy.

But wait, there’s more… Attacks represent only a small portion of security risks present in AI systems. NIST’s attack taxonomy doesn’t have any room for serious (non-attack-related) concerns such as recursive pollution or improper use of AI technology for tasks it wasn’t designed for. Far too much of NIST’s evaluation of generative AI is dedicated to prompt injection attacks, where an attacker manipulates the prompt provided to the LLM at runtime, producing undesirable results. LLM developers certainly need to consider the potential for malicious prompts (or malicious input as computer security people have always called it), but this downplays a much more important risk—stochastic behavior from LLM foundation models can be wrong and bad all by itself without any clever prompting!

At BIML, we are chiefly concerned with building security into ML systems—a fancy way of saying security engineering. By contrast, NIST’s approach encourages “red-teaming,” using teams of ethical hackers (or just people off the street and pizza delivery guys) to try to penetration test LLM systems based on chugging down a checklist of known problems. Adopting this outside-in paradigm of build (a broken thing)-break-fix will inevitably overlook huge security chasms inside the system—holes that are ripe for attackers to exploit. Instead of trying to test your way toward security one little prompt at a time (which turns out to be insanely expensive), why not build systems properly in the first place through a comprehensive overview of systemic risks?!

In any case, we would like to see appropriate regulatory action to ensure that proper security engineering takes place (including, say, documenting exactly where those training data came from and what they contain). We don’t think enlisting an army of pizza guys providing prompts is the answer. In the meantime, AI systems are already being made available to the public, and they are already wreaking havoc. Consider, for example, the recent misuse of AI to suppress voter turnout in the New Hampshire presidential primary! This kind of thing should shock the conscience of anyone who believes AI security can be tested in as an afterthought. So we have a call to action for you: it is imperative that AI architects and thought leaders adopt a risk-driven approach to engineering secure systems before releasing them to the public.

Bottom line on the NIST attack list? Mostly harmless.

Our Secret BIML Strategy

Dang. Darkreading went and published our world domination plan for machine learning security.

To properly secure machine learning, the enterprise needs to be able to do three things: find where machine learning is being used, threat model the risk based on what was found, and put in controls to manage those risks.

‘We need to find machine learning [and] do a threat model based on what you found,’ McGraw says. ‘You found some stuff, and now your threat model needs to be adjusted. Once you do your threat model and you’ve identified some risks and threats, you need to put in some controls right across all those problems.’

There is no one tool or platform that can handle all three things, but McGraw happens to be on the advisory boards for three companies corresponding to each of the areas. Legit Security finds everything, IriusRisk helps with threat modeling, and Calypso AI puts controls in place.

‘I can see all the parts moving,’ McGraw says. ‘All the pieces are coming together.’

Ah the Trinity of MLsec explained! Read the article here.

First Things First: Find AI

Apparently there are many CISOs out there who believe that their enterprise policies prohibit the use of ML, LLMs, and AI in their organization. Little do they know what’s actually happening.

This excellent article by darkreading discusses the first thing a CISO should do to secure AI: Find AI. The system described here is implemented by Legit Security.

MLsec in Rio

BIML provided a preview of our upcoming LLM Risk Analysis work (including the top ten LLM risks) at a Philosophy of Mind workshop in Rio de Janeiro January 5th. The workshop was organized by David Chalmers (NYU) and Laurie Paul (Yale).

McGraw in Rio

A tongue-in-cheek posting about the meeting can be found here.

Threat Modeling for ML

Once you learn that many of your new applications have ML built into them (often regardless of policy), what’s the next step? Threat modeling, of course. IriusRisk, the worldwide leader in threat modeling automation, announced a threat modeling library covering ML risks identified by BIML on October 26, 2023.

This is the first tool in the world to include ML risk as part of threat modeling automation. Now we’re getting somewhere.

Darkreading was the first publication to cover the news, and remains the best source for cutting edge stories about machine learning security. Read the original story here.

https://www.darkreading.com/cyber-risk/iriusrisk-brings-threat-modeling-to-machine-learning