Annotated Bibliography

As our research group reads and discusses scientific papers in MLsec, we add an entry to this bibliography. We also curate a “top 5” list.

Top 5 Papers

Arora 2018 — Multiple Meanings

Arora, Sanjeev, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. “Linear algebraic structure of word senses, with applications to polysemy.” Transactions of the Association for Computational Linguistics 6 (2018): 483-495.

Structured representations that capture distributed sub-features (micro-topics) through ML. Goes beyond word2vec and GloVe by adding “semantics.” The sketch below shows the core sparse-coding idea.

  • Representation
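
A minimal sketch of the paper’s core move, under assumed inputs: learn a dictionary of “discourse atoms” by sparse coding over word embeddings, so that a polysemous word’s largest coefficients point at its candidate senses. The embeddings, vocabulary, and atom count below are hypothetical stand-ins, not the authors’ setup.

```python
# Sketch: recover "discourse atoms" by sparse coding over word vectors,
# then read a word's candidate senses off its largest coefficients.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 50))   # stand-in for GloVe/word2vec rows
vocab = {"tie": 17}                        # stand-in word-to-row mapping

# Each word vector is approximated as a sparse combination of atoms.
coder = MiniBatchDictionaryLearning(n_components=200, alpha=1.0,
                                    transform_algorithm="lasso_lars",
                                    random_state=0)
codes = coder.fit_transform(embeddings)    # shape: (n_words, n_atoms)

# A polysemous word like "tie" should load on several distinct atoms.
top_atoms = np.argsort(-np.abs(codes[vocab["tie"]]))[:5]
print("candidate sense atoms for 'tie':", top_atoms)
```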

Gilmer 2018 — Adversarial Examples

Gilmer, Justin, Ryan P. Adams, Ian Goodfellow, David Andersen, and George E. Dahl. “Motivating the Rules of the Game for Adversarial Example Research.” arXiv preprint arXiv:1807.06732 (2018).

Great use of realistic scenarios in a risk analysis. Hilariously snarky.

  • Representation

Jetley 2018 — On generalization and vulnerability

Jetley, Saumya, Nicholas A. Lord, and Philip H. S. Torr. “With Friends Like These, Who Needs Adversaries?” 32nd Conference on Neural Information Processing Systems. 2018.

Excellent paper. Driven by theory and demonstrated by experimentation: generalization in DCNs trades off against vulnerability.

  • Attack-Lit-Pointers

Papernot 2018 — Building Security In for ML (IT stance)

Papernot, Nicolas. “A Marauder’s Map of Security and Privacy in Machine Learning.” arXiv preprint arXiv:1811.01134 (2018).

Tainted only by an old-school IT security approach, this paper aims at the core of #MLsec but misses the mark. Too much ops and not enough security engineering.

Wang 2018 — Transfer Learning Attacks

Wang, Bolun, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. “With great training comes great vulnerability: Practical attacks against transfer learning.” In 27th USENIX Security Symposium (USENIX Security 18), pp. 1281-1297. 2018.

Attacks against transfer learning in cascaded systems. If the set of all trained networks is small, this work holds water. “Empirical” settings. Some NNs are highly susceptible to tiny noise. Good use of a confusion matrix. Dumb defense through n-version voting (sketched below).

  • Attack-Lit-Pointers
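
The n-version voting defense is easy to picture. A minimal sketch, assuming a list of independently trained models, each with a predict() method (all hypothetical stand-ins):

```python
# Sketch of an n-version voting defense: query several independently
# trained models and accept a label only on consensus.
from collections import Counter

def n_version_predict(models, x, quorum=0.5):
    votes = Counter(m.predict(x) for m in models)
    label, count = votes.most_common(1)[0]
    if count / len(models) > quorum:
        return label
    return None  # no consensus: treat the input as suspicious
```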

Other Papers

Carlini 2017 — Wagner on Adversarial Testing

Carlini, Nicholas, and David Wagner. “Towards Evaluating the Robustness of Neural Networks.” arXiv preprint arXiv:1608.04644 (2017).

Super clear treatment of adversarial attacks and pretend defenses. A little bit of coding to the tests. The sketch below gives the flavor of the attack objective.
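
A minimal sketch of the attack’s flavor, not the paper’s exact formulation: a Carlini-Wagner-style objective trades perturbation size against a margin loss on the logits. The model, input, and target here are hypothetical stand-ins, and the paper’s box-constraint change of variables and constant search are omitted.

```python
# Sketch of a CW-style objective: balance perturbation size against a
# margin loss that pushes the target class ahead of all others.
import torch

def cw_objective(model, x, delta, target, c=1.0, kappa=0.0):
    logits = model(x + delta)[0]              # pre-softmax, shape (n_classes,)
    others = logits.clone()
    others[target] = float("-inf")            # exclude the target class
    margin = torch.clamp(others.max() - logits[target], min=-kappa)
    return (delta ** 2).sum() + c * margin    # minimize w.r.t. delta
```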

Christiansen 2016 — Language Representation and Structure

Christiansen, Morten H., and Nick Chater. “The Now-or-Never bottleneck: A fundamental constraint on language.” Behavioral and Brain Sciences 39 (2016).

Too much psychology and not enough ML. This paper is about context in language representation, including look-ahead and structured patterns. The main question: how big is your buffer?

Dai 2019 — Transformer-XL

Dai, Zihang, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. “Transformer-XL: Attentive language models beyond a fixed-length context.” arXiv preprint arXiv:1901.02860 (2019).

Getting past fixed-length context through various kludges. Recurrent feedback, caching hidden states from the previous segment, represents prior state.

Devlin 2018 — BERT (transformers) and pre-training

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).

Input windows and representation. Precomputing leads to transfer attacks.

Eykholt 2018 — Physical Attacks on Vision

Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. “Robust physical-world attacks on deep learning visual classification.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634. 2018.

Tape on the stop sign paper. Fairly naive attacks on non-robust representations that are meant to be psychologically plausible in that humans won’t notice. Many “empirical” settings.

  • Attack-Lit-Pointers

Goodman 2019 — Cloud Classifiers Not Robust to Simple Transformations

Goodman, Dan, and Tao Wei. “Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield.” arXiv preprint arXiv:1906.07997 (2019).

Naive experiment on cloud services using well-known methods. Real result: hints at structured noise vs. statistical noise as attack types. Representation matters. The sketch below contrasts the two kinds of probes.
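
A minimal sketch of that contrast, with a hypothetical classify() standing in for the cloud API call:

```python
# Sketch contrasting statistical noise (random Gaussian) with structured
# noise (a plain rotation) as probes of an image classifier.
import numpy as np
from scipy.ndimage import rotate

def probe(classify, image, sigma=8.0, angle=10.0):
    base = classify(image)
    noisy = np.clip(image + np.random.normal(0, sigma, image.shape), 0, 255)
    rotated = rotate(image, angle, reshape=False, mode="nearest")
    return {
        "gaussian_noise_flips_label": classify(noisy) != base,
        "rotation_flips_label": classify(rotated) != base,
    }
```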

Henderson 2018 — Hacking Around with ML

Henderson, Peter, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. “Deep Reinforcement Learning that Matters.” arXiv preprint arXiv:1709.06560 (2018).

We tweaked lots of things and found some stuff. Things matter. How you measure stuff also matters.

  • Representation

Hinton 2015 — Review

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep learning.” Nature 521, no. 7553 (2015): 436-444.

This review from Nature covers the basics in an introductory way. Some hints at representation as a thing. Makes clear that more data and faster processors account for the resurgence.

Jacobsen 2019 — Adversarial Examples

Jacobsen, Jörn-Henrik, Jens Behrmann, Richard Zemel, and Matthias Bethge. “Excessive Invariance Causes Adversarial Vulnerability.” arXiv preprint arXiv:1811.00401v2 (2019).

Mathematical explanation of adversarial vulnerability space. Includes a home-brew network and analysis set.

  • Representation

Krizhevsky 2012 — Convolutional Nets (ReLU)

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” Advances in Neural Information Processing Systems. 2012.

Elegant series of hacks to reduce overfitting. A bit of hand waving. Reference to GPU speed and huge data sets. Depth is important, but nobody knows why.

  • Time
  • Representation

Lake 2017 — Recurrent Net Weakness

Lake, Brenden, and Marco Baroni. “Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks.” arXiv preprint arXiv:1711.00350 (2018).

Naive micro-domain with misleading mappings into human semantics (movement). An artificial attack that uses structure as a weapon.

Marcus 2018 — AI Perspective on ML

Marcus, Gary. “Deep learning: A critical appraisal.” arXiv preprint arXiv:1801.00631 (2018).

General overview tainted by an old-school AI approach. Makes clear that representation, though essential, is overlooked. Some failure conditions noted, at a philosophical level.

Mitchell 2019 — Model Cards

Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for Model Reporting.” arXiv preprint arXiv:1810.03993 (2019).

A mix of sociology and political correctness with engineering transparency. Human-centric models emphasized.

Mnih 2013 — Atari

Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. “Playing Atari with deep reinforcement learning.” arXiv preprint arXiv:1312.5602 (2013).

An application of convolutional nets where the game representation has been shoved through a filter. Some open questions regarding randomness in the games (which makes them very hard to learn). Not dice rolling for turns, but rather random behavior that is cryptographically unpredictable. This paper made a bigger splash than it likely warranted.

Peters 2018 — ELMo

Peters, Matthew E., Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. “Deep contextualized word representations.” arXiv preprint arXiv:1802.05365 (2018).

Important seminal work on ELMo. Some echoes of SDM and highly distributed representation power.

  • Representation

Phillips 2011 — Racism

Phillips, P. Jonathon, Fang Jiang, Abhijit Narvekar, Julianne Ayyad, and Alice J. O’Toole. “An other-race effect for face recognition algorithms.” ACM Transactions on Applied Perception (TAP) 8, no. 2 (2011): 14.

This paper is pretty stupid. The result is simply “when your data are racist, your system will be too,” which is beyond obvious for anyone who knows how ML works. This is what happens when psych people write about ML instead of CS people.

Quinn 2017 (also mm17) — Dog Walker

Quinn, Max H., Erik Conser, Jordan M. Witte, and Melanie Mitchell. “Semantic Image Retrieval via Active Grounding of Visual Situations.” arXiv preprint arXiv:1711.00088 (2017).

Building up representations with a hybrid Copycat/NN model. Hofstadterian model. Time as an essential component in building up a representation.

Rahwan 2019 — Towards a Study of Machine Behavior

Rahwan, Iyad, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A. Christakis, Iain D. Couzin, Matthew O. Jackson, Nicholas R. Jennings, Ece Kamar, Isabel M. Kloumann, Hugo Larochelle, David Lazer, Richard McElreath, Alan Mislove, David C. Parkes, Alex ‘Sandy’ Pentland, Margaret E. Roberts, Azim Shariff, Joshua B. Tenenbaum & Michael Wellman. “Machine behavior.” Nature 568 (2019): 477-486.

Social science on machines. Very clear treatment. Trinity of trouble hinted at. Good analogs for security. Is ML code/data open source or not?

Sculley 2015 — Software Engineering Would Help

Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. “Hidden technical debt in machine learning systems.” In Advances in Neural Information Processing Systems, pp. 2503-2511. 2015.

Random kludges built of interlocked pieces and parts are a bad idea. This applies to ML as well. Light on analysis and misdirected in focus.

Sculley-ccard 2014 — Technical Debt

Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, and Michael Young. “Machine learning: The high interest credit card of technical debt.” (2014).

A diatribe against deadlines and just making stuff work. Naive criticism of flaws.

  • Representation

Sculley 2018 — Winner’s Curse (Empirical Rigor)

Sculley, D., Jasper Snoek, Ali Rahimi, and Alex Wiltschko. “Winner’s Curse? On Pace, Progress, and Empirical Rigor.” ICLR 2018 Workshop paper (2018).

Argues for a scientific approach. General and pretty obvious.

Silver 2017 — AlphaGo

Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert et al. “Mastering the game of go without human knowledge.” Nature 550, no. 7676 (2017): 354-359.

AlphaGo trains itself by playing itself. Surprising and high-profile results. Monte Carlo tree search seems to underlie the results (which representations are amenable to that kind of search?). Unclear how general these results are, or whether they apply only to certain games with fixed rules and perfect knowledge.

Springer 2018 — Sparse Coding is Good

Springer, Jacob M., Charles S. Strauss, Austin M. Thresher, Edward Kim, and Garrett T. Kenyon. “Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples.” arXiv preprint arXiv:1811.07211 (2018).

Important theory, but silly experiment. Hints at the importance of context, concept activation, and dynamic representation. Explores limits of transfer attacks with respect to representation.

  • Representation

Sundararajan 2017 — Explaining Networks

Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. “Axiomatic Attribution for Deep Networks” arXiv preprint arXiv:1703.01365 (2017).

A strangely written paper trying to get to the heart of describing why a network does what it does. Quirky use of mathematical style. Hard to understand and opaque. The sketch below shows the core integrated-gradients computation the paper axiomatizes.

  • Representation
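
A minimal sketch of integrated gradients, assuming a model that returns a scalar score; the model and tensors are hypothetical stand-ins:

```python
# Sketch of integrated gradients: average gradients along the straight
# path from a baseline to the input, then scale by the input-minus-
# baseline difference.
import torch

def integrated_gradients(model, x, baseline, steps=50):
    total = torch.zeros_like(x)
    for k in range(1, steps + 1):
        point = (baseline + (k / steps) * (x - baseline)).requires_grad_(True)
        model(point).backward()       # model(point) must be a scalar score
        total += point.grad
    return (x - baseline) * total / steps
```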

Vaswani 2017 — BERT precursor

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention Is All You Need.” 31st Conference on Neural Information Processing Systems. 2017.

BERT precursor. Introduces the transformer architecture, built entirely on self-attention.

  • Attack-Lit-Pointers

Videos and Popular Press

Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?
A: Because Keynote Speakers Make Bad Life Decisions and Are Poor Role Models
James Mickens, Harvard University
27th Usenix Security Symposium

Ali Rahimi’s talk at NIPS (NIPS 2017 Test-of-Time Award presentation)

“Ingredients of Intelligence” (video): Brenden Lake explains why he builds computer programs that seek to mimic the way humans think.

Brenden Lake, NYU, March 26, 2018 | EmTech Digital


Douglas R. Hofstadter, “The Shallowness of Google Translate,” The Atlantic