BIML keynotes RVA Tech’s annual Women in Tech event, Richmond, VA

Richmond’s thriving tech community came together in force for the Richmond Technology Council’s (rvatech) annual Women in Tech event. BIML’s Katie McMahon delivered the opening keynote address to a packed audience at the Dewey Gottwald Center. This year’s event saw record attendance, drawing engineers, data scientists, cybersecurity specialists, CIOs, CTOs, entrepreneurs, product leaders, members of the state administration, and representatives from the Governor’s AI Task Force.

In her keynote, McMahon broadly addressed the topic of “AI Overwhelm” and delved into BIML’s architectural risk analysis of large language models (LLMs). She left the audience more aware of the risks associated with LLMs while encouraging a thoughtful, deliberate approach to building products and services.

McMahon also spoke about “GenV” – Generation Voice – a term she coined in 2015 to describe those born from 2010 onward: the first digital-native generation to grow up accustomed to speaking with computers. With the meteoric rise of ChatGPT and other increasingly sophisticated LLM chatbots, the implications for user experience are far-reaching, as people increasingly interact with computers through voice and natural conversational dialogue.

The audience was exceptional: fully attentive, engaged, and receptive to the keynote. One of the biggest compliments came from two data scientists who were scheduled to present a technical workshop later in the day, titled “Clearing the Fog: Diagnosing Hallucinations in your LLMs.” After the keynote, they approached McMahon and enthusiastically announced, “We loved your talk! And we are now going to start calling it ‘Wrongness’!”

This year’s Women in Tech event once again demonstrated the depth and breadth of talent within Richmond’s thriving tech ecosystem. Katie’s keynote offered thought-provoking insights into the challenges and opportunities created by rapid advances in artificial intelligence.

Note: Katie would like to expressly thank rvatech and its planning committee, including Emily Mercer (pictured with Katie), Chris Burroughs, and Jazmyn Ward, for producing a wonderful event and graciously hosting her.

All Your LLM Are Belong to Us

We didn’t want to rain on the Davos parade, so we waited until this week to release our latest piece of work. Our paper, “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” spotlights what we view as major concerns with foundation model LLMs as well as their adaptations and applications.

We are fans of ML and “AI” (which the whole world tilted towards in 2023, fawning over the latest models with both awe and apprehension), but we’re calling out the inherent risks. Not hand-wavy stuff: we’ve spent the past year reading science publications, dissecting the research ideas, understanding the math, testing models, parsing through the noise, and ultimately analyzing LLMs through the lens of security design and architecture. We took the tool we invented for ML security risk analysis in 2020 (see our earlier paper, “Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning”) and applied it to LLMs specifically.

We found 81 risks overall, distilled a Top Ten Risks list, and shined a spotlight on 23 critical risks inherent in black-box LLM foundation models.

And now 2024 is off and running. It will be the year of “AI Governance” in name and (optimistic) intent. In practice, however, it’s on pace to be a shitshow for democracy as regulators run like hell just to get to the starting line.

The Slovak parliamentary election deepfake debacle is the tip of the iceberg. OpenAI tried to get ahead of concerns that its technology might be used to influence the US Presidential Election in nefarious ways by posting its plans to deter misinformation. The irony is that OpenAI trained its models on a corpus so large that it holds vast globs of crazy rhetoric, conspiracy theories, fake news, and other pollution, which its stochastic models will draw upon and (predictably) spit out. That output will, in turn, add to the ever-growing pile of garbage-strewn data in the world, which future LLM foundation models will ingest. See the problem here? That’s recursive pollution.
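To make that feedback loop concrete, here is a minimal toy simulation of recursive pollution in Python. Every number in it is an illustrative assumption, not a figure from our paper: a starting pollution rate for the corpus, an amplification factor for how much wrongness a model reproduces, and the share of each new corpus that is model-generated text. The point is the shape of the curve, not the values.

```python
# Toy model of recursive pollution: each generation of LLMs is trained on a
# corpus that mixes carried-over data with output from the previous
# generation, so any wrongness the models emit gets folded back into future
# training data. All parameter values are illustrative assumptions.

def next_corpus_pollution(pollution: float,
                          amplification: float = 1.5,
                          synthetic_share: float = 0.5) -> float:
    """Return the polluted fraction of the next generation's training corpus.

    pollution       -- fraction of the current corpus that is garbage
    amplification   -- wrongness a model emits per unit of polluted training
                       data (> 1.0 means the model adds noise of its own)
    synthetic_share -- fraction of the next corpus that is model output
    """
    model_output_pollution = min(1.0, pollution * amplification)
    # Blend of carried-over data (old pollution level) and model-generated
    # text (amplified pollution level).
    return (1 - synthetic_share) * pollution + synthetic_share * model_output_pollution

pollution = 0.05  # assume 5% of the starting corpus is garbage
for generation in range(1, 9):
    pollution = next_corpus_pollution(pollution)
    print(f"generation {generation}: corpus pollution = {pollution:.3f}")
```

Even under these mild toy assumptions, the pollution fraction only moves one way: up, compounding every generation. That monotone climb, with model output feeding the next model’s training data, is the recursion in recursive pollution.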

It’s the Data, stupid. (We sure wish it were that simple, anyway.)

See our official Press Release here.