It all may seem a bit confusing, but MLsec is really about securing the ML stack itself, much as software security is about securing software itself (while security software is something else entirely). Irius Risk has, for a number of years, included BIML’s knowledge of MLsec risks in its automated threat modeling tool. So they know plenty about MLsec.
As of early 2025, Irius Risk is also putting ML to work inside its own security tools. On March 12th, we did a webinar together about the two ML-powered tools Irius Risk has built: Jeff and Bex.
I was honored to be asked to present a talk on my thesis work in Nancy at the Automatic Type Design 3 conference. Though I certainly loved working on Letter Spirit, my thesis project with Doug Hofstadter at Indiana University, in the years since I have been helping to establish the field of software security and working to make machine learning security a reality. So when I was asked to speak at a leading typography and design conference organized by the Atelier national de recherche typographique / École nationale supérieure d’art et de design de Nancy, it came as a delightful surprise.
Scott Kim shows a picture I made thirty years ago, illustrating gridfonts.
Here is the abstract I ginned up:
ML/AI, Typographic Design, and the Four “I”s
During this talk I will touch on Intuition, Insight, and Inspiration. First I will set the context by introducing the Letter Spirit project and its microdomain — work I published exactly 30 years ago as a Ph.D. student of Doug Hofstadter’s. I will spend some time discussing the role of roles (and other mental structures) in creativity and human perception. Then I’ll take a quick run through the current state of “AI” (really ML) so we get a feel for how LLMs actually work. We will talk about WHAT machines and relate them to human cognition. Finally, just as I get around to intuition, insight, and inspiration, I will run out of time.
As you may already know, intuition, insight, and inspiration are all deeply human things missing from current AI/ML models. Thirty years ago at CRCC, we were exploring a theory of design and creativity steeped in a cognitive model of concepts and emergent perception that would exhibit all three. Needless to say, there is plenty of work to be done even thirty years later. The good news is that human designers have nothing to fear from recent advances in AI/ML. Yet, anyway.
Veteran tech reporter Rob Lemos had a few questions for BIML regarding ML security hacking (aka red teaming) and the DMCA. Here is the resulting Dark Reading article. The original questions, which flesh out BIML’s position more clearly, appear below.
Lemos: I’m familiar with attempts to use the DMCA to stifle security research in general. What are the dangers specifically to AI security and safety researchers? Have there been any actual legal cases where a DMCA violation was alleged against an AI researcher?
BIML: I am unaware of DMCA cases against MLsec researchers.
I will state for the record that we are not going to red team or pen test our way to AI trustworthiness. The real way to secure ML is at the design level, with a strong focus on training data, representation, and evaluation. Pen testing has high sex appeal but limited effectiveness.
As designed today, ML systems have flaws that can be exposed by hacking but not fixed by hacking. See the BIML work for a whole host of risks to think about.
The biggest challenge for MLsec researchers is overcoming hype and disinformation from vendors. In my view the red teaming stuff plays right into this by focusing attention on marginally relevant things.
Lemos: What changes are the US Copyright Office experts considering to support AI security and safety research? How do these changes redraw the current line between allowed and illegal research?
BIML: The changes appear to be merely cosmetic. The high irony is that AI vendors themselves are the subject of many lawsuits involving misuse of copyrighted material during training. My prediction is that we will need clarity there first before we fret about “hacking.”
Note that there are some very good scientists working on real MLsec (see the BIML bibliography top 5 for some stellar examples).
Lemos: Are AI companies pushing back against these efforts? Do you think we will end up with a “reasonable” approach (for whatever definition of reasonable you want)?
BIML: Most AI vendors seem to acknowledge various problems and then downplay them with cutesy names like “hallucinate” instead of “WRONG” and “prompt injection” instead of “broken, ill-formed, under-defined natural language API.”
We need to clean up the black box itself (and its interfaces), not try to constrain bad black box behavior with an infinite set of filters. In security engineering we know all about the futility of blacklisting versus whitelisting…but blacklist filtering seems to be the current sanctioned approach.
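To make that concrete, here is a minimal sketch of the two filtering postures (our illustration only; the patterns and the allowed grammar are hypothetical, not anything a vendor actually ships):

```python
import re

# Blacklisting: enumerate known-bad inputs and reject them. The list is
# never complete -- an attacker needs only one phrasing you did not anticipate.
BLACKLIST = [r"ignore previous instructions", r"reveal the system prompt"]  # hypothetical patterns

def blacklist_filter(prompt: str) -> bool:
    """Allow the prompt unless it matches a known-bad pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLACKLIST)

# Whitelisting: define the closed set of acceptable inputs up front and
# reject everything else by default.
ALLOWED = re.compile(r"[A-Za-z0-9 ,.?'-]{1,200}")  # hypothetical allowed grammar

def whitelist_filter(prompt: str) -> bool:
    """Allow the prompt only if it fits the allowed form."""
    return ALLOWED.fullmatch(prompt) is not None
```

The catch is that an LLM’s “API” is open-ended natural language, so no closed whitelist grammar can describe all legitimate input. That is exactly why vendors fall back on blacklist filtering, and exactly why the filters keep failing.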
Lemos:From your viewpoint, should we err on the side of protecting innovation (i.e., first movers) or on the side of ensuring safety and security? Why?
BIML: Security is always in a tradeoff against functionality. The glib answer is, “both.” We should not shut down security engineering and testing, but we should also not throw out the ML baby with the security bathwater.
If your enterprise is adopting ML for its powerful features, make sure to do a proper in-depth threat model of your system and cover your risks with appropriate controls.
BIML keynotes RVA Tech’s annual Women in Tech event, Richmond, VA
Richmond’s thriving tech community came together in force for the Richmond Technology Council’s (rvatech) annual Women in Tech event. BIML’s Katie McMahon delivered the opening keynote address to a packed audience at the Dewey Gottwald Center. This year’s event saw record attendance, drawing engineers, data scientists, cybersecurity specialists, CIOs, CTOs, entrepreneurs, product leaders, members of the state administration, and representatives from the Governor’s AI Task Force.
In her keynote, McMahon broadly addressed the topic of “AI Overwhelm,” and delved into BIML’s architectural risk analysis of large language models (LLMs). She left the audience more aware of the risks associated with LLMs, while also encouraging a thoughtful and mindful approach to building products and services.
McMahon also spoke about “GenV” – Generation Voice – a term she coined in 2015 to describe those born from 2010 onward, the first digital-native generation to grow up accustomed to speaking with computers. With the meteoric rise of ChatGPT and other increasingly sophisticated LLM chatbots, the implications for user experience are far-reaching, as users increasingly interact with computers through voice and natural conversational dialogue.
The audience was exceptional, fully attentive, engaged, and receptive to the keynote. One of the biggest compliments came from two data scientists who were scheduled to present a technical workshop later in the day, titled “Clearing the Fog: Diagnosing Hallucinations in your LLMs.” After the keynote, they approached McMahon and enthusiastically announced, “We loved your talk! And we are now going to start calling it ‘Wrongness’!”
This year’s rvatech Women in Tech event once again demonstrated the depth and breadth of talent within Richmond’s thriving tech ecosystem. Katie’s keynote address provided thought-provoking insights into the challenges and opportunities presented by the rapid advancements in artificial intelligence.
Note: Katie would like to expressly thank rvatech and their wonderful planning committee, including Emily Mercer (pictured in the photo with Katie), Chris Burroughs, and Jazmyn Ward, for producing a wonderful event and graciously hosting her.
As independent scholars, we have a huge amount of respect for professors and students of Computer Science at small colleges in the United States. We were proud to participate as the dinner speaker at the CCSC Eastern Conference this year.
Our payment was a cool T-shirt and some intellectual stimulation. (Now you know why McGraw never takes selfies.)
A one-time student of mine at Earlham College, a one-time employee of mine at Cigital, and now the infamous daveho (author of FindBugs).
Sometimes it pays to stop and think, especially if you can surround yourself with some exceptional grad students. On the way to Rose-Hulman, BIML made a pit stop in Bloomington for a dinner focused on two papers: Vaswani et al.’s 2017 “Attention Is All You Need,” which defines the transformer architecture (see also https://berryvilleiml.com/bibliography/), and Dennis’s “The Antecedents of Transformer Models,” which will appear soon in Current Directions in Psychological Science.
The idea was to explore and critique the architectural decisions underlying the transformer. Bottom line? Most of them were made for efficiency reasons. There is plenty of room for better, cognitively inspired ML. Maybe efficiency is NOT all you need.
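For the curious, here is a minimal sketch of the scaled dot-product attention at the transformer’s core (a numpy toy of ours with made-up dimensions; see the Vaswani paper for the real thing):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    # One matrix multiply scores every query against every key at once --
    # the hardware-friendly, parallel choice the paper favors over the
    # step-by-step recurrence of earlier sequence models.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Nothing in that computation is cognitively motivated; it is a couple of big matrix multiplies, chosen because they parallelize beautifully on GPUs where recurrent networks must crunch tokens one after another.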
We did this all over delicious Korean food at Hoosier Seoulmate.
Special thanks to Rob Goldstone who provided the Dennis manuscript and grounded the cognitive psychology thread and to Eli McGraw who conjured up the dinner from thin air.
The Lake Monroe home away from home.
Invited Talk at Rose-Hulman Institute of Technology
Dr. McGraw gave a talk on Wednesday, 10/16/24, at Rose-Hulman in Terre Haute, Indiana. This version of the talk was aimed at computer science students. There were some very good questions.
BIML co-founder Gary McGraw joins an esteemed panel of experts to discuss Machine Learning Security in Dublin Thursday October 3rd. Participation requires registration. Please join us if you are in the area.