Decipher covers the White House AI Executive Order, with the last word going to BIML. Read the article from October 31, 2023 here.
Much of what the executive order is trying to accomplish is work the software and security communities have been pursuing for decades, with limited success.
“We already tried this in security and it didn’t work. It feels like we already learned this lesson. It’s too late. The only way to understand these systems is to understand the data from which they’re built. We’re behind the eight ball on this,” said Gary McGraw, CEO of the Berryville Institute of Machine Learning, who has been studying software security for more than 25 years and is now focused on AI and machine learning security.
“The big data sets are already being walled off and new systems can’t be trained on them. Google, Meta, Apple, those companies have them and they’re not sharing. The worst future is that we have data feudalism.”
Another challenge in the effort to build safer and less biased models is the quality of the data on which those systems are being trained. Inaccurate, biased, or incomplete data going in will lead to poor results coming out.
“We’re building this recursive data pollution problem and we don’t know how to address it. Anything trained on a huge pile of data is going to reflect the data that it ate,” McGraw said. “These models are going out and grabbing all of these bad inputs that in a lot of cases were outputs from the models themselves.”
“It’s good that people are thinking about this problem. I just wish the answer from the government wasn’t red teaming. You can’t test your way out of this problem.”