Houston, we have a problem: Anthropic Rides an Artificial Wave

I’ll tip my hat to the new Constitution
Take a bow for the new revolution
Smile and grin at the change all around
Pick up my guitar and play
Just like yesterday
Then I’ll get on my knees and pray
We don’t get fooled again

Out there in the smoking rubble of the fourth estate, it is hard enough to cover cyber cyber. Imagine, then, piling on the AI bullshit. Can anybody cut through the haze? Apparently for the WSJ and the NY Times, the answer is no.

Yeah, it’s Anthropic again, this time writing a blog-post-level document titled “Disrupting the first reported AI-orchestrated cyber espionage campaign” and getting the major tech press all wound around the axle about it.

The root of the problem here is that expertise in cyber cyber is rare AND expertise in AI/ML is rare…but expertise in both fields? Not only is it rare, but like hydrogen-7, which has a half-life of about 10^-24 seconds, it disappears pretty fast as both fields progress. Even superstar tech reporters can’t keep everything straight.

Let’s start with the end. What question should the press have asked Anthropic about their latest security story? How about, “which parts of these attacks could ONLY be accomplished with agentic AI?” From our little perch at BIML, it looks like the answer is a resounding none.

Now that we know the ending, let’s look at both sides of the beginning. Security first. Unfortunately, brute-force, cloud-scale, turnkey software exploitation is what has been driving the ransomware cybercrime wave for at least a decade now. All of the offensive security tooling used by the attackers Anthropic describes is available as open source frameworks, leading experts like Kevin Beaumont to label the whole thing “vibe usage of open source attack frameworks.” Would existing controls work against this? Apparently not for “a handful” of the thirty companies Anthropic claims were successfully attacked. LOL.

By now, those of us old enough to know better than to call ourselves security experts have learned to approach claims like the ones Anthropic is making with skepticism. “Show me the logs,” we yell as we shake our canes in the air. Seriously. Where is the actual evidence? Who has seen it? Do we credulously repeat whatever security vendors tell us as if it were the gods’ honest truth? No we do not. Who was successfully attacked? Did the reporters chase them down? Who was on the list of thirty?

AI second. It is all too easy to exaggerate claims in today’s superheated AI universe. One of the most trivial (and intellectually lazy) ways to do this is to use anthropomorphic language when we are describing what LLMs do. LLMs don’t “think” or “believe” or “have intentionality” like humans do. (FWIW, Anthropic is very much guilty of this and they are not getting any better.) LLMs do do a great job of role playing though. So dressing one up as a black hat nation state hacker and sending it lumbering off into the klieg lights is easy.
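To make the “role playing is easy” point concrete, here is a minimal sketch (assumptions: the Anthropic Python SDK, a placeholder model id, and a purely benign persona of our own invention, none of it drawn from Anthropic’s report) showing that a persona is nothing more than a string passed along as a system prompt.

```python
# Minimal sketch, not anyone's attack tooling: an LLM "persona" is just a string.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The entire "role" lives in this one string. Swap it for any other persona
# and the model gamely plays along; there is no deeper machinery behind it.
persona = "You are a cheerful museum docent who explains technology in plain language."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id; use whatever you have access to
    max_tokens=300,
    system=persona,                    # one line of text sets the whole role
    messages=[{"role": "user", "content": "Explain what a firewall does."}],
)

print(response.content[0].text)
```

Change the persona string and the model “becomes” someone else, which is exactly why “the model role-played an attacker” is not, by itself, a technological milestone.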

So who did it? How do we prove that beyond a reasonable doubt? Hilariously, the real attacks here appear to be asking an LLM to pretend to be a white hat red team member dressed in a Where’s Waldo shirt and wielding an SSRF attack. Wake me up when it’s over.

Ultimately, is this really the “first documented case of a cyberattack largely executed without human intervention at scale”? No. That was the script kiddies in the ’90s.

Let’s be extremely clear here. Machine Learning Security is absolutely critical. We have lots of work to do. So let’s ground ourselves in reality and get to it.

BIML granted official non-profit status

After an extensive year-long process, the Berryville Institute of Machine Learning has been granted 501(c)(3) status by the United States Internal Revenue Service. BIML is located at the foot of the Blue Ridge Mountains on the banks of the Shenandoah River in Berryville, Virginia.

We are proud of the impact our work has made since we were founded in 2019, and we look forward to the wider engagement that non-profit status will allow us.

BIML in Brazil: mind the sec keynote

Here is a recording of the Thursday morning keynote at mind the sec. The conference was attended by 16,000 people. The main stage was set in the round in the middle of the show floor, an interesting concept that made delivery non-trivial.

https://www.youtube.com/live/0HZ__sXdL04?si=5rmNvGnCQgyd-WE7&t=5031

BIML in São Paulo

In addition to keynoting mind the sec, Dr. McGraw spoke at the University of São Paulo.

You can watch the talk (delivered to 180 USP graduate students) here.

The in-person portion of the audience…

Legit Webinar: swsec/appsec Meets AI/Development

Has application development changed because of AI? Yes it has. Fundamentally. What does this mean for software security? Liav Caspi, Legit CTO, and BIML’s Gary McGraw discuss this important topic. Have a watch.

BIML on Cybersecurity Today

BIML was on a recent episode of Cybersecurity Today discussing “The Hidden Risks of LLMs: What Cybersecurity Pros Need to Know.” Have a watch.

Irius Risk Applies ML FOR Security While Practicing ML Security Itself

It all may seem a bit confusing, but MLsec is really about securing the ML stack itself, kind of like software security is about securing software itself (while security software is something entirely different). Irius Risk has, for a number of years, included BIML’s knowledge of MLsec risks in its automated threat modeling tool. So they know plenty about MLsec.

As of early 2025, Irius Risk is also putting ML to work inside its security tools. On March 12th, we did a webinar together about the two tools Irius Risk has built that apply ML: Jeff and Bex.

Have a watch.

For more videos in the series, see https://www.youtube.com/playlist?list=PLpo8W6wt_WV-haEOL-nWyz5TKhJOJ5Gao.

Revisiting Letter Spirit Thirty Years Later

(crossposted to apothecaryshed)

I was asked to present a talk on my thesis work in Nancy at the Automatic Type Design 3 conference. Though I certainly loved working on Letter Spirit, my thesis with Doug Hofstadter at Indiana University, in the years since I have been helping to establish the field of software security and working to make machine learning security a reality. So the invitation to speak at a leading typography and design conference organized by Atelier national de recherche typographique / École nationale supérieure d’art et de design de Nancy came as a delightful surprise and an honor.

Scott Kim shows a picture I made thirty years ago, illustrating gridfonts.

Here is the abstract I ginned up:

ML/AI, Typographic Design, and the Four “I”s

During this talk I will touch on Intuition, Insight, and Inspiration. First I will set the context by introducing the Letter Spirit project and its microdomain — work I published exactly 30 years ago as a Ph.D. student of Doug Hofstadter’s. I will spend some time discussing the role of roles (and other mental structures) in creativity and human perception. Then I’ll take a quick run through the current state of “AI” (really ML) so we get a feeling for how LLMs actually work. We will talk about WHAT MACHINES DO and relate that to human cognition. Finally, just as I get around to intuition, insight, and inspiration, I will run out of time.

As you may already know, intuition, insight, and inspiration are all deeply human things missing from current AI/ML models. Thirty years ago at CRCC, we were exploring a theory of design and creativity steeped in a cognitive model of concepts and emergent perception that would exhibit all three. Needless to say, there is plenty of work to be done even thirty years later. The good news is that human designers have nothing to fear from recent advances in AI/ML. Yet, anyway.

For more videos from the conference, see https://automatic-type-design.anrt-nancy.fr/colloques/automatic-type-design-3

I am particularly interested in giving this talk to other interested audiences focused on design and creativity. Contact me through BIML.