BERRYVILLE INSTITUTE OF MACHINE LEARNING (BIML) GETS $150,000 OPEN PHILANTHROPY GRANT

Funding will advance ethical AI research

Online PR News – 27-January-2021 – BERRYVILLE, VA – The Berryville Institute of Machine Learning (BIML), a research think tank dedicated to safe, secure and ethical development of AI technologies, announced today that it is the recipient of a $150,000 grant from Open Philanthropy.

BIML, which is already well known in ML circles for its pioneering document, “Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning,” will use the Open Philanthropy grant to further its scientific research on machine learning risk and to get the word out more widely through talks, tutorials, and publications.

“In a future where machine learning shapes the trajectory of humanity, we’ll need to see substantially more attention on thoroughly analyzing ML systems from a security and safety standpoint,” said Catherine Olsson, Senior Program Associate for Potential Risks from Advanced Artificial Intelligence at Open Philanthropy. “We are excited to see that BIML is taking a holistic, security-engineering-inspired view that considers both accidental risk and intentional misuse risk. We hope this funding will support the growth of a strong community of ML security practitioners at the intersection of real-world systems and basic research.”

Early work on ML security has focused on specific failures, including systems that learn to be sexist, racist, and xenophobic, and systems that can be manipulated by attackers. The BIML ML Security Risk Framework details the top 10 security risks in ML systems today. It is designed for use by developers, engineers, designers, and others who are creating applications and services that use ML technologies, and it can be practically applied in the early design and development phases of any ML project.

“In what is by now an all too familiar pattern, our embrace of advanced ML technology is outpacing an understanding of the security risks its use drags along with it. AI and ML automation continues to accelerate at an alarming pace,” said Dr. Gary McGraw, co-founder of BIML and world-renowned software security pioneer. “At BIML, we’re dedicated to exposing and elucidating security risk in ML systems. We are pleased as punch that Open Philanthropy is pouring accelerant on our spark.”

About BIML

The Berryville Institute of Machine Learning was created in 2019 to address security issues with ML and AI. The organization was founded by Gary McGraw, author, long-time security expert, and CTO of Cigital (acquired by Synopsys); Harold Figueroa, director of the Machine Intelligence Research and Applications (MIRA) Lab at Ntrepid; Victor Shepardson, an artist and research engineer at Ntrepid; and Richie Bonett, a systems engineer at Verisign. BIML is headquartered in Berryville, Virginia. For more information, visit http://berryvilleiml.com/.

About Open Philanthropy

Open Philanthropy identifies outstanding giving opportunities, makes grants, follows the results, and publishes its findings. Its mission is to give as effectively as it can and share the findings openly so that anyone can build on them.
