
Gary McGraw
Gary McGraw is co-founder of the Berryville Institute of Machine Learning. He is a globally recognized authority on software security and the author of eight best-selling books on the topic.
His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books, and he is editor of the Addison-Wesley Software Security series.
Dr. McGraw has also written over 100 peer-reviewed scientific publications. Gary serves on the Advisory Boards of IriusRisk, MaxMyInterest, RunSafe Security, and Secure Code Warrior.
He has also served as a Board member of Cigital and Codiscope (acquired by Synopsys) and as Advisor to CodeDX (acquired by Synopsys), Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years.
He holds a dual PhD in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean’s Advisory Council for the Luddy School of Informatics, Computing, and Engineering.
KEYNOTE
Security Engineering for Machine Learning
Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, and playing complex games such as chess, Go, and Atari video games. This progress has led to much breathless popular press coverage of Artificial Intelligence and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however.
ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk incurred by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIML) focuses on understanding and categorizing the security engineering risks that ML introduces at the design level.
Though the idea of addressing security risk in ML is not a new one, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses instead on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. I will present the top five of the 78 known ML security risks.
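To make one such design-level risk concrete, consider data poisoning: because an ML system's behavior is derived directly from its training data, anyone who can write to that data can influence the resulting model. The sketch below is a minimal, hypothetical illustration using scikit-learn; the toy dataset, the logistic regression model, and the 30% label-flip rate are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of training-data poisoning (label flipping).
# All choices here (dataset, model, 30% poisoning rate) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A toy binary classification task standing in for a real training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Baseline: a model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Attack: an adversary who controls part of the training feed flips the
# labels on 30% of the examples the pipeline ingests.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The point is architectural rather than operational: no amount of scanning the deployed model catches this, so data provenance and the trust boundary around the training pipeline have to be part of the design-level threat model.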
TECHNICAL TRACK
How to Avoid the Top Ten Software Security Flaws
Software security defects come in two categories: bugs in the implementation and flaws in the design. In the commercial marketplace, much more attention has been paid to finding and fixing bugs than to finding and fixing flaws, because automatically identifying bugs is a much easier problem than identifying design flaws.
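To make the bug/flaw distinction concrete, the hypothetical Python snippets below contrast the two categories; the function names and schema are invented for illustration.

```python
# Hypothetical snippets contrasting an implementation bug with a design flaw.
import sqlite3

# BUG (implementation defect): SQL built by string formatting is injectable.
# Static analysis tools find this pattern reliably, which is one reason the
# market has concentrated on bug finding.
def get_user_buggy(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % username  # injectable
    ).fetchall()

# The local fix is mechanical: a parameterized query.
def get_user_fixed(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

# FLAW (design defect): this handler trusts an "is_admin" claim supplied by
# the client instead of deriving authorization on the server. Every line is
# locally well-formed, so no bug scanner flags it; only reasoning about the
# design (misplaced trust across a trust boundary) reveals the problem.
def delete_account(conn: sqlite3.Connection, request: dict):
    if request.get("is_admin"):  # client-controlled, so trivially forgeable
        conn.execute("DELETE FROM users WHERE id = ?", (request["user_id"],))
```

The bug can be fixed one line at a time; the flaw can only be fixed by changing how the system decides whom to trust.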
The IEEE Center for Secure Design was founded to address this issue head on. My presentation introduces the top ten software security design flaws and discusses how to avoid them. The content was developed in concert with IriusRisk, Twitter, Google, Synopsys, HP, the Sadosky Foundation of Argentina, George Washington University, Intel/McAfee, RSA, the University of Washington, EMC, Harvard University, and the Athens University of Economics and Business.
It's important, of course, to know that these flaws account for half of the defects commonly encountered in software security. But more important still is learning how to avoid them when designing a new system or revisiting an existing one.