MLsec Musings
-
Richmond's thriving tech community came together in force for rvatech's (the Richmond Technology Council's) annual Women in Tech event. BIML's Katie McMahon delivered the opening keynote address to a packed audience at the Dewey Gottwald Center. This year's event saw record attendance, drawing engineers, data scientists, cybersecurity specialists, CIOs, CTOs, entrepreneurs, product leaders, members of the state administration, and representatives from the Governor's AI Task Force.
In h...
-
As independent scholars, we have a huge amount of respect for professors and students of Computer Science at small colleges in the United States. We were proud to participate as the dinner speaker at the CCSC Eastern Conference this year.
Our payment was a cool T-shirt and some intellectual stimulation. (Now you know why McGraw never takes selfies.)
One-time student of mine at Earlham College, one-time employee of mine at Cigital, and now the infamous daveho (author of FindBugs).
-
Sometimes it pays to stop and think, especially if you can surround yourself with some exceptional grad students. On the way to Rose-Hulman, BIML made a pit stop in Bloomington for a dinner focused on two papers: Vaswani's 2017 "Attention Is All You Need" (which defined the transformer architecture; see https://berryvilleiml.com/bibliography/) and Dennis's "The Antecedents of Transformer Models" (which will appear in Current Directions in Psychological Science soon).
The idea was to explor...
-
Dr. McGraw gave a talk Wednesday 10/16/24 at Rose-Hulman in Terre Haute, Indiana. This version of the talk is aimed at Computer Science students. There were some very good questions.
-
Here is a video of the Dublin panel recorded October 3rd 2024. This was quite an excellent event. Have a watch.
-
BIML co-founder Gary McGraw joins an esteemed panel of experts to discuss Machine Learning Security in Dublin Thursday October 3rd. Participation requires registration. Please join us if you are in the area.
-
On the back of our LAWFARE piece on data feudalism, McGraw did a video podcast with Decipher and his old friend Dennis Fisher. Have a look.
-
Dan Geer came across this marketing thingy and sent it over. It serves to remind us that when it comes to ML, it’s all about the data.
Take a look at this LAWFARE article we wrote with Dan about data feudalism.
Welcome to the era of data feudalism. Large language model (LLM) foundation models require huge oceans of data for training—the more data trained upon, the better the result. But while the massive data collections began as a straightforward harvesting of public observables,...
-
BIML coined the term data feudalism in our LLM Risks document (which you should read). Today, after a lengthy editing cycle, LAWFARE published an article co-authored by McGraw, Dan Geer, and Harold Figueroa. Have a read, and pass it on.
https://www.lawfaremedia.org/article/why-the-data-ocean-is-being-sectioned-off
-
BIML Livestream 7/11/24 2pm EST: Deciphering AI: Unpacking the Impact on Cybersecurity
—10 July 2024
BIML enthusiasts may be interested in this event, in which co-founder Gary McGraw participated.
Deciphering AI: Unpacking the Impact on Cybersecurity By Lindsey O’Donnell-Welch
Also features Phil Venables CISO of Google Cloud and Nathan Hamiel from Blackhat.
Here’s the Decipher landing page where the event tomorrow will be livestreamed: https://duo.com/decipher/deciphering-ai-unpacking-the-impact-on-cybersecurity
It will also be livestreamed on the Decipher LinkedIn page: https://www...
-
In May we were invited to present our work to a global audience of Google engineers and scientists working on ML. Security people also participated. The talk was delivered via video and hosted by Google Zurich.
A few hundred people participated live. Unfortunately, though the session was recorded on video, Google has requested that we not post the video. OK Google. You do know what we said about you is what we say to everybody about you. Whatever. LOL.
Here is the talk abstr...
-
BIML turned out in force for a version of the LLM Risks presentation at ISSA NoVa.
BIML showed up in force (that is, all of us). We even dragged along a guy from Meta.
The ISSA President presents McGraw with an ISSA coin.
Though we appreciate Microsoft sponsoring the ISSA meeting and lending some space in Reston, here is what BIML really thinks about Microsoft’s approach to what they call “Adversarial AI.”
No really. You can’t even begin to pretend that “red...
-
BIML wrote an article for IEEE Computer describing 23 Black Box Risks found in LLM Foundation models. In our view, these risks determine perfect targets for government regulation of LLMs. Have a read. You can also fetch the article from the IEEE.
-
CalypsoAI produced a video interview in which I hosted Jim Routh and Neil Serebryany. We talked all about AI/ML security at the enterprise level. The conversation is great. Have a listen.
I am proud to be an Advisor to CalypsoAI; have a look at their website.
-
Dr. McGraw recently visited Stockholm, Oslo, and Bergen, hosting events in all three cities.
In Stockholm, a video interview was recorded in addition to a live breakfast presentation. Here are some pictures of the presenter's view of the video shoot.
Reactions were scary!
The talk in Oslo was packed, with lots of BIML friends in the audience.
Bergen had a great turnout too, with a very interactive audience including academics from the university.
... -
BIML's work was featured in an April 5th talk at the Luddy Center for Artificial Intelligence, part of Indiana University.
Here is the talk abstract. If you or your organization are interested in hosting this talk, please let us know.
10, 23, 81 — Stacking up the LLM Risks: Applied Machine Learning Security
I present the results of an architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks previously i...
-
A recently released podcast features an in-depth discussion of BIML's recent LLM Risk Analysis, defining terms in easy-to-understand fashion. We cover what exactly a RISK is, whether open source LLMs make any sense, how big BIG DATA really is, and more.
Have a listen here https://targetingai.podbean.com/e/security-bias-risks-are-inherent-in-genai-black-box-models/
-
To watch the video on The Insurer TV website, click here.
-
This wide-ranging interview starts with a brief history lesson and dives deep into BIML's LLM Risk Analysis. Have a read, pass it on, and most importantly read the report.
-
BIML's LLM work got its first public presentation in San Diego on February 26th as an invited talk for three conference workshops (simultaneously). The workshops coincided with NDSS. All NDSS '24 workshops: https://www.ndss-symposium.org/ndss2024/co-located-events/
-
Have a listen as Paul Roberts digs deep into BIML’s work on machine learning security. What exactly is data feudalism? Why does it matter? What are the biggest risks associated with LLMs?
See the BIML LLM risk analysis (released 1.24.24 under a Creative Commons license).
Or watch the podcast on Youtube…
-
Air Canada is learning the hard way that when YOUR chatbot on YOUR website is wrong, YOU pay the price. This is as it should be. This story from CTV News is a great development.
BIML warned about this in our LLM Risk Analysis report published 1.24.24. In particular, see:
[LLMtop10:9:model trustworthiness] Generative models, including LLMs, include output sampling algorithms by their very design. Both input (in the form of slippery natural language prompts) and generated output ...
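To make the "output sampling" point concrete, here is a minimal, hypothetical sketch of temperature-based sampling (the toy vocabulary and logits are invented for illustration; this is not Air Canada's system or any real model, just the kind of nondeterminism baked into generative output):

import numpy as np

rng = np.random.default_rng()

# Hypothetical next-token distribution a model might produce for one prompt.
vocab = ["refund", "voucher", "no_refund", "escalate"]
logits = np.array([2.0, 1.5, 1.4, 0.3])

def sample_token(logits, temperature=1.0):
    # Softmax with temperature, then draw one token at random.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# The same prompt can yield different answers on different calls.
print([sample_token(logits, temperature=1.2) for _ in range(5)])

Identical input, different output: that stochasticity is one reason pinning business-critical answers on raw generative output is risky.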
-
The More Things Change, the More They Stay the Same: Defending Against Vulnerabilities You Create
—14 February 2024
Regarding the AP wire story out this morning (which features a quote by BIML):
Like any tool that humans have created, LLMs can be repurposed to do bad things. The biggest danger that LLMs pose in security is that they can leverage the ELIZA effect to convince gullible people into believing they are thinking and understanding things. This makes them particularly interesting in attacks that involve what security people call “spoofing.” Spoofing is important enough as an attack categ...
-
And in the land where I grew up
Into the bosom of technology
I kept my feelings to myself
Until the perfect moment comes
-David Byrne

From its very title—Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training—you get the first glimpse of the anthropomorphic mish-mosh interpretation of LLM function that infects this study. Further on, any doubts about this deeply-misguided line of reasoning (and its detrimental effects on actual work in ...
-
The February 6th episode of Dennis Fisher’s Decypher podcast does an excellent job unpacking BIML’s latest work on LLMs. Have a listen: https://duo.com/decipher/decipher-podcast-gary-mcgraw-on-ai-security
-
Here is an excellent piece from Dennis Fisher (currently writing for Decipher) covering our new LLM Architectural Risk Analysis. Dennis always produces accurate and tightly-written work.
This article includes an important section on data feudalism, a term that BIML coined in an earlier Decipher article:
“Massive private data sets are now the norm and the companies that own them and use them to train their ...
-
The Register has a great interview with Ilia Shumailov on the number one risk of LLMs. He calls it “model collapse” but we like the term “recursive pollution” better because we find it more descriptive. Have a look at the article.
Our work at BIML has been deeply influenced by Shumailov’s work. In fact, he currently has two articles in our Annotated Bibliography TOP 5.
Here is what we have to say about recursive pollu...
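As a toy illustration of the idea (our own sketch, not Shumailov's actual experiments), consider a simple Gaussian estimator where each generation is "trained" only on samples drawn from the previous generation's model; the fitted distribution drifts away from the original data:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)   # "real" human-generated data

for generation in range(10):
    mu, sigma = data.mean(), data.std()             # fit a "model" to the current data
    print(f"gen {generation}: mu={mu:+.3f} sigma={sigma:.3f}")
    # The next generation sees only output sampled from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=200)

Run long enough, the estimate narrows and wanders. With LLMs trained on LLM output, the analogous degradation at scale is what we call recursive pollution.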
-
What's the difference (philosophically) between Adversarial AI and Machine Learning Security? Once again, Rob Lemos cuts to the quick with his analysis of MLsec happenings. It helps that Rob has actual experience in ML/AI (unlike, say, most reporters on the planet), which is a big part of why he gets things right.
We were proud to have our first coverage come from Rob in darkreading.
My favorite quot...
-
Have a listen to Google's Cloud Security Podcast EP150, "Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw." The episode is tight, fast, and filled with good information.
- Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?
- If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance pr...
-
We didn’t want to rain on the Davos parade, so we waited until this week to release our latest piece of work. Our paper “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” spotlights what we view as major concerns with foundation model LLMs as well as their adaptations and applications.
We are fans of ML and “AI” (which the whole world tilted towards in 2023, fawning over the latest models with both awe and apprehension). We’re calling out the...
-
The National Institute of Standards and Technology (aka NIST) recently released a paper enumerating many attacks relevant to AI system developers. With the seemingly-unending rise in costs incurred by cybercrime, it’s sensible to think through the means and motives behind these attacks. NIST provides good explanations of the history and context for a variety of AI attacks in a partially-organized laundry list. That’s a good thing. However, in our view, NIST’s taxonomy lacks a useful structu...
-
Dang. Darkreading went and published our world domination plan for machine learning security.
“To properly secure machine learning, the enterprise needs to be able to do three things: find where machine learning is being used, threat model the risk based on what was found, and put in controls to manage those risks.
‘We need to find machine learning [and] do a threat model based on what you found,’ McGraw says. ‘You found some stuff, and now your threat model needs to be adjusted. ...
-
Apparently there are many CISOs out there who believe that their enterprise policies prohibit the use of ML, LLMs, and AI in their organization. Little do they know what's actually happening.
This excellent article by darkreading discusses the first thing a CISO should do to secure AI: Find AI. The system described here is implemented by Legit Security.
-
BIML provided a preview of our upcoming LLM Risk Analysis work (including the top ten LLM risks) at a Philosophy of Mind workshop in Rio de Janeiro January 5th. The workshop was organized by David Chalmers (NYU) and Laurie Paul (Yale).
A tongue-in-cheek posting about the meeting can be found here.
-
Once you learn that many of your new applications have ML built into them (often regardless of policy), what's the next step? Threat modeling, of course. On October 26, 2023, Irius Risk, the worldwide leader in threat modeling automation, announced a threat modeling library covering ML risks identified by BIML.
This is the first tool in the world to include ML risk as part of threat modeling automation. Now we’re getting somewhere.
Darkreading was the first publication to cover the n...
-
Decipher covers the White House AI Executive Order, with the last word to BIML. Read the article from October 31, 2023 here.
Much of what the executive order is trying to accomplish involves things that the software and security communities have been working on for decades, with limited success.
“We already tried this in security and it didn’t work. It feels like we already learned this lesson. It’s t...
-
BIML was invited to Oslo to present its views on Machine Learning Security in two presentations at NBIM in October.
The first was delivered to 250+ technologists on staff (plus 25 or so invited guests from all around Norway). During the talk, BIML revealed its “Top Ten LLM Risks” data for the first time (pre-publication).
The second session was a fireside chat for 19 senior executives.
-
This positioning of the red teaming article is much better than the original AP story. Here is a pointer to the Fortune article from August 13, 2023.
-
The idea that machine learning security is exclusively about "hackers," "attacks," or some other kind of "adversary" is misguided. This is the same sort of philosophy that misled software security into a myopic overfocus on penetration testing way back in the mid '90s. Not that pen testing and red teaming are useless, mind you, but there is way more to security engineering than penetrate and patch. It took us forever (well, a decade or more) to get past the pen test puppy love and start...
-
Synopsys has a new podcast focused on building security in. Episode two features BIML’s own Gary McGraw discussing building security into Machine Learning.
Have a listen.
-
We are extremely pleased to announce that Katie McMahon has joined BIML as a permanent researcher.
Katie McMahon is a global entrepreneur and technology executive who has been at the leading edge of sound recognition and natural language understanding technologies for the past 20 years. As VP at Shazam, she brought the iconic music recognition app to market; it went on to reach 2 billion installs and 70 billion queries (Shazam was acquired by Apple), and she spent over a decade at Soun...
-
BIML just published a short popular press article on LLM risks. Have a look.
-
As the world is rapidly advancing technologically, it is vital to understand the implications and opportunities presented by Large Language Models (LLMs) in the realm of national security and beyond. This discussion will bring together leading experts from various disciplines to share insights on the risks, ethical considerations, and potential benefits of utilizing LLMs for intelligence, cybersecurity, and other applications.
-
Irius Risk, a company specializing in automating threat modeling for software security, hosted a webinar on Machine Learning and Threat Modeling March 30, 2023. BIML CEO Gary McGraw participated in the webinar along with Adam Shostack.
The webinar was recorded, and you can watch it here. FWIW, we are still not exactly clear on Adam's date of replacement.
-
Every bunch of years, the National Science Foundation holds vision workshops to discuss scientific progress in fields they support. This year BIML’s Gary McGraw was pleased to keynote the Computer Science “Secure and Trustworthy Cyberspace” meeting.
He gave a talk on what #MLsec can learn from #swsec, with a focus on technology discovery, development, and commercialization. There are many parallels between the two fields. Now is a great time to be working in machine learning security...
-
Lots of excellent content on ML Security, ML, and security in this video. Have a look.
-
Right. So not only is ML going to write your code, it is also going to hack it. LOL. I guess the thought leaders out there have collectively lost their minds.
Fortunately, Taylor Armerding has some sane things to say about all this. Read his article here.
-
Adam Shostack is one of the pre-eminent experts on threat modeling. So when he publishes an article, it is always worth reading and thinking about. But Adam seems to be either naïve or insanely optimistic when it comes to AI/ML progress. ML has no actual IDEA what it’s doing. Don’t ever forget that.
This issue is so important that we plan to debate it soon in a webinar format. Contact us for details.
Read Adam's article here.
-
As a software security guy, I am definitely in tune with the idea of automated coding. But today's "code assistants" do not have any design-level understanding of code. Plus they copy (statistically speaking, anyway) chunks of code full of bugs.
Robert Lemos wrote a very timely article on the matter. Check it out.
-
The second article in a two-part darkreading series on machine learning data exposure and data-related risk focuses on protecting training data without screwing it up. For the record, we believe that technical approaches like synthetic data creation and differential privacy definitely screw up your data, sometimes so much that the ML activity you wanted to accomplish is no longer feasible.
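A minimal sketch of the tradeoff we mean (using the standard Laplace mechanism on a toy bounded mean; the data and epsilon values are invented for illustration): the stronger the privacy guarantee (smaller epsilon), the noisier the answer you get back.

import numpy as np

rng = np.random.default_rng(1)
ages = rng.integers(18, 90, size=500)        # stand-in for sensitive training data
true_mean = ages.mean()

def dp_mean(values, epsilon, lower=18, upper=90):
    # Laplace mechanism on a bounded mean; sensitivity is (upper - lower) / n.
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

for eps in (10.0, 1.0, 0.1, 0.01):
    print(f"epsilon={eps:>5}: dp mean={dp_mean(ages, eps):7.2f} (true {true_mean:.2f})")

At small epsilon the released statistic can be off by years; when the downstream ML task needs the signal the noise just buried, that is exactly the "screwing it up" we have in mind.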
The first article in the series can be found here. That art...
-
As part of our mission to spread the word about machine learning security far and wide, we were pleased to deliver a talk at Westminster-Canterbury in the Shenandoah Valley.
The talk posed a bit of a challenge since it was the very first "Thursday talk" delivered after COVID swept the planet. As you might imagine, smart seniors remain very wary of the pandemic. In the end, the live talk was delivered to around 12 people with an audience of about 90 on closed circuit TV. That,...
-
We're pleased that BIML has helped spread the word about MLsec (that is, machine learning security engineering) all over the world. We've given talks in Germany, Norway, England, and, of course, all over the United States.
And we’re always up for more. If you are interested in having BIML participate in your conference, please contact Gary McGraw through his website.
This summer, we were asked to give a talk at our local community center, the Barns of Rose Hill. We were happy to ...
-
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.
Our fourth video talk features Professor David Evans, a computer scientist at the University of Virginia working on Security Engineering for Machine Learning. David is interested ...
-
This version of the Security Engineering for Machine Learning talk is focused on computer scientists familiar with algorithms and basic machine learning concepts. It was delivered 2/24/22.
You can watch the video on YouTube here https://youtu.be/Goe0Sbn5Ma8
-
In an article published in February 2022, BIML CEO Gary McGraw discusses why ML practitioners need to consider ops data exposure in addition to worrying about training data. Have a read.
This is the first in a series of two articles focused on data privacy and ML. This one, the first, focuses on ops data exposure. The second discusses training data in more detail.
-
Tickets for the Barns of Rose Hill talk are available now. Get yourself some here!
-
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to continue the bi-monthly video series we’re calling BIML in the Barn.
Our third video talk features Ram Shankar Siva Kumar, a researcher at Microsoft Azure working on Adversarial Machine Learning. Of course, we prefer to call this Security Engi...
-
It turns out that operational data exposure swamps out all other kinds of data exposure and data security issues in ML, something that came as a surprise.
Check out this darkreading article detailing this line of thinking.
-
An important part of our mission at BIML is to spread the word about machine learning security. We’re interested in compelling and informative discussions of the risks of AI that get past the scary sound bite or the sexy attack story. We’re proud to introduce a bi-monthly video series we’re calling BIML in the Barn.
Our first video talk features Maritza Johnson, a professor at UC San Diego and an expert on human-centered security and privacy. As you’re about to see, Maritza combines re...
-
The (extremely) local paper in the county where Berryville is situated (rural Virginia) is distributed by mail. They also have a website, but that is an afterthought at best.
Fortunately, the Clarke Monthly is on the cutting edge of technology reporting. Here is an article featuring BIML and Security Engineering for Machine Learning.
Have a read and pass it on!
-
I gave a talk this week at a meeting hosted by Microsoft and Mitre called the 6th Security Data Science Colloquium. It was an interesting bunch (about 150 people) including the usual suspects: Microsoft, Google, Facebook, a bunch of startups and universities, and of course BIML.
I decided to rant about nomenclature, with a focus on RISKS versus ATTACKS as a central tenet of how to approach ML security. Heck, even the term “Adversarial AI” gets it wrong in all the ways. For the record, ...
-
Another week, another talk in Indiana! This time Purdue’s CERIAS center was the target. Turns out I have given “one talk per decade” at Purdue, starting with a 2001 talk (then 2009). Here is the 2021 edition.
What will I be talking about in 2031??!
-
BIML founder Gary McGraw delivered the last talk of the semester for the Center for Applied Cybersecurity Research (CACR) speakers series at Indiana University. You can watch the talk on YouTube.
If your organization is interested in having a presentation by BIML, please contact us today.
-
Some nice coverage in the security press for our work at BIML. Thanks to Rob Lemos!
-
As our MLsec work makes abundantly clear, data play a huge role in the security of an ML system. Our estimation is that somewhere around 60% of all security risk in ML can be directly associated with data. And data are biased in ways that lead to serious social justice problems including racism, sexism, classism, and xenophobia. We've read a few ML bias papers (see the BIML Annotated Bibliography for our commentary). Turns out that social justice in ML is a thorny and difficult subject.
We...
-
We were very fortunate to have Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (and famous programmer of Copycat), join us for our regular BIML meeting.
We discussed Melanie’s new paper Abstraction and Analogy-Making in Artificial Intelligence. We talked about analogy, perception, symbols, emergent computation, machine learning, and DNNs.
A recorded version of our conversation is available, as is a video version.
We hope you enjoy what you see...
-
An important part of BIML’s mission as an institute is to spread the word about our understanding of machine learning security risk throughout the world. We recently decided to take on three college and high school interns to provide a bridge to academia and to inculcate young minds early in the intricacies of machine learning security. We introduce them here in a series of blog entries.
We are very pleased to introduce Aishwarya Seth, who is a BIML University Scholar. Aishwarya i...
-
In Clarke County, a small research group is working to make technology more secure
By MICKEY POWELL, The Winchester Star
Mar 3...
-
An important part of BIML’s mission as an institute is to spread the word about our understanding of machine learning security risk throughout the world. We recently decided to take on three college and high school interns to provide a bridge to academia and to inculcate young minds early in the intricacies of machine learning security. We introduce them here in a series of blog entries.
We are very pleased to introduce Trinity Stroud, who is a BIML University Scholar.
Trinity is...
-
An important part of BIML’s mission as an institute is to spread the word about our understanding of machine learning security risk throughout the world. We recently decided to take on three college and high school interns to provide a bridge to academia and to inculcate young minds early in the intricacies of machine learning security. We introduce them here in a series of blog entries.
We are very pleased to introduce Nikil Shyamsunder, who is the first BIML High School Scholar.
... -
BERRYVILLE INSTITUTE OF MACHINE LEARNING (BIML) GETS $150,000 OPEN PHILANTHROPY GRANT
—27 January 2021
Berryville Institute of Machine Learning (BIML) Gets $150,000 Open Philanthropy Grant. Funding will advance ethical AI research
Online PR News – 27-January-2021 – BERRYVILLE, VA – The Berryville Institute of Machine Learning (BIML), a research think tank dedicated to safe, secure and ethical development of AI technologies, announced today that it is the recipient of a $150,000 grant from Open Philanthropy.
BIML, which is already well known in ML circles for its pioneering document, “Ar...
-
Our recent architectural risk analysis of machine learning systems identified 78 particular risks associated with nine specific components found in most machine learning systems. In this article, we describe and discuss the 10 most important security risks of those 78.
-
BERRYVILLE, Va., Feb. 13, 2020 – The Berryville Institute of Machine Learning (BIML), a research think tank dedicated to safe, secure and ethical development of AI technologies, today released the first-ever risk framework to guide development of secure ML. The “Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning” is designed for use by developers, engineers, designers and others who are creating applications and services that use ML technologies.
... -
The first talk on BIML’s new Architectural Risk Analysis of Machine Learning Systems was delivered this Wednesday at Lord Fairfax Community College. The talk was well attended and included a remote audience attending virtually. The Winchester Star published a short article about the talk.
Berryville Institute of Machine Learning (BIML) is located in Clarke County, Virginia, an area served by Lord Fairfax Community College.
-
Recently there have been several documents published as guides to security in machine learning. In October 2019, NIST published a draft called “A Taxonomy and Terminology of Adversarial Machine Learning”. Then in November, Microsoft published several interrelated webpages laying out a threat model for AI/ML systems and tying it to MS’s existing Software Development Lifecycle. We took a look at these documents to find out what they are trying to do, what they do well, and what they lack.
T...
-
Community resources can be a double-edged sword; on the one hand, systems that have faced public scrutiny can benefit from the collective effort to break them. But nefarious individuals aren’t interested in publicizing the flaws they identify in open systems, and even large communities of developers have trouble resolving all of the flaws in such systems. Relying on publicly available information can expose your own system to risks, particularly if an attacker is able to identify similaritie...
-
ML systems rely on a number of possibly untrusted, external sources for both their data and their computation. Let’s take on data first. Mechanisms used to collect and process data for training and evaluation make an obvious target. Of course, ML engineers need to get their data somehow, and this necessarily invokes the question of trust. How does an ML system know it can trust the data it’s being fed? And, more generally, what can the system do to evaluate the collector’s trustworthiness? B...
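There is no complete technical answer to the trust question, but basic provenance hygiene helps at the margins. Here is a hedged sketch (the file name and pinned digest below are hypothetical) of refusing to train when a vetted dataset has silently changed:

import hashlib
from pathlib import Path

# Digest recorded when the dataset was vetted; any silent change breaks the match.
EXPECTED_SHA256 = "replace-with-known-good-digest"   # hypothetical pinned value

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

dataset = Path("training_data.csv")                  # hypothetical file
if sha256_of(dataset) != EXPECTED_SHA256:
    raise RuntimeError("training data changed since vetting; refusing to train")

Of course, an integrity check says nothing about whether the vetted data were any good (or unbiased, or unpoisoned) in the first place; that is the harder trust problem this entry is pointing at.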
-
Security is often about keeping secrets. Users don’t want their personal data leaked. Keys must be kept secret to avoid eavesdropping and tampering. Top-secret algorithms need to be protected from competitors. These kinds of requirements are almost always high on the list, but turn out to be far more difficult to meet than the average user may suspect.
ML system engineers may want to keep the intricacies of their system secret, including the algorithm and model used, hyperparameter and co...
-
Privacy is tricky even when ML is not involved. ML makes things even trickier by, in some sense, re-representing sensitive and/or confidential data inside the machine. This makes the original data "invisible" (at least to some users), but remember that the data are still in some sense "in there somewhere." So, for example, if you train a classifier on sensitive medical data and you don't consider what will happen when an attacker tries to get those data back out through a set of sophistic...
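To make "get those data back out" concrete, the simplest membership-inference signal is just a confidence gap between records a model was trained on and records it was not. A minimal sketch with synthetic stand-in features (scikit-learn assumed available; this is illustrative, not a real attack on medical data):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))                       # stand-in for sensitive records
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_in, y_in)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each record's true label.
    return model.predict_proba(X)[np.arange(len(y)), y]

print("members:    ", true_label_confidence(model, X_in, y_in).mean())
print("non-members:", true_label_confidence(model, X_out, y_out).mean())

A persistent gap between the two numbers is precisely what a membership-inference attacker exploits to decide whether a given record was in the training set.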
-
Keep It Simple, Stupid (often spelled out KISS) is good advice when it comes to security. Complex software (including most ML software) is at much greater risk of being inadequately implemented or poorly designed than simple software is, causing serious security challenges. Keeping software simple is necessary to avoid problems related to efficiency, maintainability, and of course, security. But software is by its very nature complex.
Machine Learning seems to defy KISS by its very natu...
-
The risk analysis of the generic ML system above uses a set of nine “components” to help categorize and explain risks found in various logical pieces. Components can be either processes or collections. Just as under...
-
The principle of least privilege states that only the minimum access necessary to perform an operation should be granted, and that access should be granted only for the minimum amount of time necessary.
When you give out access to parts of a system, there is always some risk that the privileges associated with that access will be abused. For example, let's say you are going on vacation and you give a friend the key to your home, just to feed the pets, collect the mail, and so forth. Although y...
-
Even under ideal conditions, complex systems are bound to fail eventually. Failure is an unavoidable state that should always be planned for. From a security perspective, failure itself isn’t the problem so much as the tendency for many systems to exhibit insecure behavior when they fail.
The best real-world example we know is one that bridges the real world and the electronic world—credit card authentication. Big credit card companies such as Visa and MasterCard spend lots of money on au...
-
The idea behind defense in depth is to manage risk with diverse defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense hopefully prevents a full breach.
Let’s go back to our example of bank security. Why is the typical bank more secure than the typical convenience store? Because there are many redundant security measures protecting the bank, and the more measures there are, the more secure the place is.
Security cameras alone are a de...
-
Security people are quick to point out that security is like a chain. And just as a chain is only as strong as the weakest link, an ML system is only as secure as its weakest component. Want to anticipate where bad guys will attack your ML system? Well, think through which part would be easiest to attack.
ML systems are different from many other artifacts that we engineer because the data in ML are just as important (or sometimes even more important) than the learning mechanism itself....
-
Early work in security and privacy of ML has taken an "operations security" tack focused on securing an existing ML system and maintaining its data integrity. For example, Nicolas Papernot uses Saltzer and Schroeder's famous security principles to provide an operational perspective on ML security. In our view, this work does not go far enough into ML design to satisfy our goals. Following Papernot, we directly address Saltzer and Schroeder's security principles as adapted in the book Buildi...
-
The exceptionally tasteful BIML logo was designed by Jackie McGraw. The logo incorporates both a yin/yang concept (huh, wonder where that comes from?) and a glyph that incorporates a B, an M, and an L in a clever way.
Here is the glyph:
Here is my personal logo (seen all over, but most famously on the cover of Software Security):
Here is the combined glyph plus yin/yang which ma...
-
Welcome to the BIML blog where we will (informally) write about MLsec, otherwise known as Machine Learning security. BIML is short for the Berryville Institute of Machine Learning. For what it’s worth, we think it is pretty amusing to have a “Berryville Institute” just like Santa Fe has the “Santa Fe Institute.” You go, Berryville!
BIML was born when I retired from my job of 24 years in January 2019. Many years ago as a graduate student at Indiana University, I did lots of work in...