BIML on Cybersecurity Today

BIML was on a recent episode of Cybersecurity Today discussing “The Hidden Risks of LLMs: What Cybersecurity Pros Need to Know.” Have a watch.

Irius Risk Applies ML FOR Security While Practicing ML Security Itself

It all may seem a bit confusing, but really MLsec is about securing the ML stack itself. Kind of like software security is about securing software itself (while security software is something entirely different). Irius Risk has, for a number of years, included BIML knowledge of MLsec risks in its automated threat modeling tool. So they know plenty about MLsec.

As of early 2025, Irius Risk is also putting ML to work inside its security tools. On March 12th, we did a webinar together about the two tools Irius Risk has built that on applied ML: Jeff and Bex.

Have a watch.

For more videos in the series, see https://www.youtube.com/playlist?list=PLpo8W6wt_WV-haEOL-nWyz5TKhJOJ5Gao.

Revisiting Letter Spirit Thirty Years Later

(crossposted to apothecaryshed)

I was honored to be asked to present a talk on my thesis work in Nancy at the Automatic Type Design 3 conference. Though I certainly loved working on Letter Spirit, my thesis with Doug Hofstadter at Indiana University, in the years since I have been helping to establish the field of software security and working to make machine learning security a reality. So when I was asked to speak at a leading typography and design conference organized by Atelier national de recherche typographique / École nationale supérieure d’art et de design de Nancy, it came as a delightful surprise and an honor.

Scott Kim shows a picture I made thirty years ago, illustrating gridfonts.

Here is the abstract I ginned up:

ML/AI, Typographic Design, and the Four “I”s

During this talk I will touch on Intuition, Insight, and Inspiration. First I will set the context by introducing the Letter Spirit project and its microdomain — work I published exactly 30 years ago as a Ph.D. student of Doug Hofstadter’s. I will spend some time discussing the role of roles (and other mental structures) in creativity and human perception. Then I’ll take a quick run through the current state of “AI” (really ML) so we get a feeling of how LLMs actually work. We will talk about WHAT MACHINES and relate them to human cognition. Finally, just as I get around to intuition, insight, and inspiration, I will run out of time.

As you may already know, intuition, insight, and inspiration are all deeply human things missing from current AI/ML models. Thirty years ago at CRCC, we were exploring a theory of design and creativity steeped in a cognitive model of concepts and emergent perception that would exhibit all three. Needless to say, there is plenty of work to be done even thirty years later. The good news is that human designers have nothing to fear from recent advances in AI/ML. Yet, anyway.

For more videos from the conference, see https://automatic-type-design.anrt-nancy.fr/colloques/automatic-type-design-3

I am particularly interested in giving this talk to other interested audiences focused on design and creativity. Contact me through BIML.

McGraw Talks to Lemos About ML Security Research

Veteran tech reporter Rob Lemos had a few questions for BIML regarding ML security hacking (aka red teaming) and the DMCA. Here is the resulting darkreading article. See the original questions which flesh out BIML’s position more clearly below.

Lemos: I’m familiar with attempts to use DMCA to stifle security research in general. What are the dangers specifically to AI security and safety researchers? Have there been any actual legal cases where a DMCA violation was alleged against a AI researcher?

BIML: I am unaware of DMCA cases against MLsec researchers.

I will state for the record that we are not going to red team or pen test out way to AI trustworthiness. The real way to secure ML is at the design level with a strong focus on training data, representation, and evaluation. Pen testing has high sex appeal but limited effectiveness.

As designed today, ML systems have flaws that can be exposed by hacking but not fixed by hacking. See the BIML work for a whole host of risks to think about.

The biggest challenge for MLsec researchers is overcoming hype and disinformation from vendors. In my view the red teaming stuff plays right into this by focusing attention on marginally relevant things.

Lemos: What changes are the US Copyright Office experts considering to support AI security and safety research? How do these changes redraw the current line between allowed and illegal research?

BIML:The changes appear to be merely cosmetic. The high irony is that AI vendors themselves are subject to many suits involving misuse of copyrighted material during training. My prediction is that we will need clarity there first before we fret about “hacking.”

Note that there are some very good scientists working on real MLsec (see the BIML bibliography top 5 for some stellar examples).

Lemos: Are AI companies pushing back against these efforts? Do you think we will end up with a “reasonable” approach (for whatever definition of reasonable you want)?

Most AI vendors seem to acknowledge various problems and then downplay them with cutesy names like “hallucinate” instead of “WRONG” and “prompt injection” instead of “broken, ill-formed, under-defined natural language API.”

We need to clean up the black box itself (and it’s interfaces), not try to constrain bad black box behavior with an infinite set of filters. In security engineering we know all about the futility of black listing vs white listing…but black list filtering seems to be the current sanctioned approach.

Lemos: From your viewpoint, should we err on the side of protecting innovation (i.e., first movers) or on the side of ensuring safety and security? Why?

BIML: Security is always in a tradeoff against functionality. The glib answer is, “both.” We should not shut down security engineering and testing, but we should also not throw out the ML baby with the security bathwater.

If your enterprise is adopting ML for its powerful features, make sure to do a proper in-depth threat model of your system and cover your risks with appropriate controls.

BIML Speaks at CCSC Eastern

As independent scholars, we have a huge amount of respect for professors and students of Computer Science at small colleges in the United States. We were proud to participate as the dinner speaker at the CCSC Eastern Conference this year.

Our payment was a cool T-shirt and some intellectual stimulation. (Now you know why McGraw never takes selfies.)

One time student of mine at Earlham College, one time employee of mine at Cigital, and now the infamous daveho (author of Find Bugs).

A visit to IU Bloomington

Sometimes it pays to stop and think, especially if you can surround yourself with some exceptional grad students. On the way to Rose-Hulman, BIML made a pit stop in Bloomington for a dinner focused on two papers: Vaswani’s 2017 Attention is All You Need (defining the transformer architecture) also see https://berryvilleiml.com/bibliography/ and Dennis “the antecedents of transformer models” (which will appear in Current Directions in Psychological Science soon.

The idea was to explore and critique the architectural decisions underlying the Transformer architecture. Bottom line? Most of them were made for efficiency reasons. There is lots of room for better cognitively-inspired ML. Maybe efficiency is NOT all you need.

We did this all over delicious Korean food at Hoosier Seoulmate.

Special thanks to Rob Goldstone who provided the Dennis manuscript and grounded the cognitive psychology thread and to Eli McGraw who conjured up the dinner from thin air.

The Lake Monroe home away from home.

Invited Talk at Rose-Hulman Institute of Technology

Dr. McGraw gave a talk Wednesday 10/16/24 at Rose-Hulman in Terre Haute, Indiana. This version of the talk is aimed at Computer Science students. There were some very good questions.

Calypso Dublin Panel Features BIML

Here is a video of the Dublin panel recorded October 3rd 2024. This was quite an excellent event. Have a watch.