Master Class

Explanation and Verification of Machine Learning Models

The CPAIOR Master Class is an annual one-day event to disseminate cutting-edge research in the areas of constraint programming, artificial intelligence, operations research, and the integration of these fields.

Abstract: Machine learning models are among the most successful artificial intelligence technologies, making an impact in a variety of practical applications. However, many concerns have been raised about the lack of understanding of the decision-making process behind this technology. A natural question, therefore, is whether we can rely on the decisions that ML models make. The master class covers two ways to approach this problem. The first approach is to define properties that we expect an ML model to satisfy and to verify these properties. Verification guarantees can reassure the user that the model behaves as expected. Our speakers will present state-of-the-art techniques for the verification of ML models. The second approach is to better understand the decision-making process of ML models; for example, the user can require that an ML-based decision be accompanied by an explanation for this decision. Our speakers will give an overview of recent advances in the explainability of ML models.

List of confirmed speakers:

Invited Talk

Alessio Lomuscio: Towards verifying AI systems based on deep neural networks

Alessio Lomuscio (http://www.doc.ic.ac.uk/~alessio) is a Professor in the Department of Computing at Imperial College London (UK), where he leads the Verification of Autonomous Systems Group. He serves as Deputy Director for the UKRI Doctoral Training Centre in Safe and Trusted Artificial Intelligence. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence, and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. Alessio's research interests concern the development of safe artificial intelligence. Since 2000 he has worked on the development of formal methods for the verification of autonomous systems and multi-agent systems. To this end, he has put forward several methods based on model checking and various forms of abstraction to verify AI systems, including robotic swarms, against AI-based specifications. He is presently focusing on the development of methods to ascertain the correctness of AI systems based on deep networks. He has published over 150 papers in AI conferences (including IJCAI, KR, AAAI, ECAI, AAMAS), verification and formal methods conferences (CAV, SEFM), and international journals (AIJ, JAIR, ACM ToCL, JAAMAS). He is presently serving as general co-chair of AAMAS 2021 and as an Editorial Board member for AIJ, JAIR, JAAMAS, and IEEE TAI.

Towards verifying AI systems based on deep neural networks

There has been considerable interest of late in the implementation of AI methods in a broad range of applications, including autonomous systems. A key difficulty in the use of AI is its certification. Formal verification has long been developed to assess and debug traditional software and hardware systems, but its extension to AI systems remains problematic. A particularly severe open challenge is the verification of AI systems based on neural networks. In this talk I will try to summarise some of the recent work in the community towards the verification of neural systems. I will cover both the verification of deep neural networks in isolation and that of closed-loop systems, where some of the agent components are realised via deep neural networks. I will survey both exact and approximate methods, including MILP-based approaches, dependency analysis, linear relaxations, and symbolic interval propagation. Time permitting, I will demo some of the open-source toolkits developed and maintained by the Verification of Autonomous Systems Group at Imperial College London, including Venus, Verinet, and VenMAS. We will observe that the verification of neural models with hundreds of thousands of nodes is now feasible, with further advances likely in the near future.
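
As a minimal illustration of the approximate style of analysis mentioned above, the sketch below shows plain (non-symbolic) interval bound propagation through a toy fully connected ReLU network. It is only meant to convey the general principle; the function, network, and input box are hypothetical and are not taken from Venus, Verinet, or VenMAS.

    import numpy as np

    def interval_bound_propagation(layers, l, u):
        """Propagate an input box [l, u] through a feed-forward ReLU network.

        layers: list of (W, b) weight/bias pairs (a hypothetical toy network).
        Returns sound elementwise lower/upper bounds on the network outputs.
        """
        for i, (W, b) in enumerate(layers):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            # Affine layer: extreme values of each pre-activation over the box.
            new_l = W_pos @ l + W_neg @ u + b
            new_u = W_pos @ u + W_neg @ l + b
            if i < len(layers) - 1:
                # ReLU is monotone, so bounds pass through it elementwise.
                new_l, new_u = np.maximum(new_l, 0.0), np.maximum(new_u, 0.0)
            l, u = new_l, new_u
        return l, u

    # Toy 2-2-1 network and the input box [0, 1] x [0, 1].
    layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])),
              (np.array([[1.0, 1.0]]), np.array([0.0]))]
    lb, ub = interval_bound_propagation(layers, np.zeros(2), np.ones(2))
    print(lb, ub)  # a property such as "output < 3" is proved if ub < 3 everywhere

If the resulting bounds are too loose to prove a property, tighter relaxations or exact methods (such as MILP encodings) can be used instead, which is precisely the exact-versus-approximate spectrum the talk surveys.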

Invited Talk

Gagandeep Singh: Certified Artificial Intelligence

Gagandeep Singh will be starting as a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC) in Fall 2021. He is currently working with VMware Research. His research interests lie at the intersection of artificial intelligence (AI) and programming languages. His long-term goal is to design end-to-end automated formal reasoning tools for real-world systems with both software and AI components, such as autonomous vehicles, robots, and AI-powered healthcare devices. Previously, he obtained a Ph.D. in Computer Science from ETH Zurich in 2020, where he designed new scalable and precise automated reasoning methods and tools for programs and deep neural networks.

Certified Artificial Intelligence

Although modern data-driven AI systems surpass human-level performance in many challenging domains such as vision, planning, and the natural sciences, concerns remain about their fragility when applied in the real world, which poses a threat to their wider adoption. Indeed, obtaining AI systems that are theoretically guaranteed to be safe and reliable is a fundamental challenge of critical importance. In this talk, I will present a path towards addressing this fundamental problem. Specifically, I will introduce new mathematical methods combining convex optimization with the classical abstract interpretation framework that enable scalable and precise logical reasoning about the (potentially infinite number of) behaviors of an AI system (e.g., a deep neural network). I will then show how these methods enable both the creation of state-of-the-art automated verifiers for modern AI systems and the design of new provable training techniques. Finally, I will outline several promising future research directions.
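
One well-known building block of such convex relaxations, shown here purely as an illustration (the specific relaxations presented in the talk may differ), is the triangle relaxation of a ReLU activation y = max(0, x) whose input is known to lie in an interval [l, u] with l < 0 < u:

    \[
      y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l}.
    \]

These three linear constraints describe the convex hull of the ReLU graph over [l, u]; propagating such constraints layer by layer yields sound over-approximations of a network's behavior that an automated verifier can then check against the desired property.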

Invited Talk

Guy Katz: Safety in AI Systems: SMT-Based Verification of Deep Neural Networks

Guy Katz is an assistant professor at the Hebrew University of Jerusalem, Israel. He received his Ph.D. at the Weizmann Institute of Science in 2015. His research interests lie at the intersection of Formal Methods and Software Engineering, in particular the application of formal methods to software systems with components generated via machine learning. https://www.katz-lab.com/

Safety in AI Systems: SMT-Based Verification of Deep Neural Networks

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present an SMT-based technique for formally verifying properties of deep neural networks (or providing counterexamples). The technique can be used to prove various safety properties of neural networks, or to measure a network's robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle piecewise-linear constraints, such as Rectified Linear Units (ReLUs), which are a crucial ingredient in many modern neural networks. We evaluate our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). We will also briefly discuss some of the cutting-edge research in this field.
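
For orientation, the verification query behind such techniques can be phrased as a satisfiability problem (this is a generic formulation, not necessarily the exact one used in the talk): to prove that a network N satisfies a property P for all inputs in a box, one checks the satisfiability of

    \[
      l \le x \le u \;\wedge\; y = N(x) \;\wedge\; \neg P(y),
    \]

where the constraint y = N(x) unfolds into affine constraints plus piecewise-linear ReLU constraints of the form v = max(0, \hat{v}). If the formula is unsatisfiable, the property holds on the whole input box; a satisfying assignment is a concrete counterexample. The technique described above extends the simplex method so that these ReLU constraints can be handled directly during the search.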

Invited Talk

Guy Van den Broeck: Reasoning About the Probabilistic Behavior of Classifiers

Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, KR, and AAAI (honorable mention). He also serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.

Reasoning About the Probabilistic Behavior of Classifiers

This talk will give an overview of our recent work on using probabilistic models of the feature distribution to reason about the expected behavior of learned classifiers. This includes probabilistic sufficient explanations, which formulate explaining an instance of classification as choosing the 'simplest' subset of features such that observing only those features is probabilistically 'sufficient' to explain the classification; that is, sufficient to give strong probabilistic guarantees that the model will behave similarly when all features are observed under the data distribution. We leverage tractable probabilistic reasoning tools, such as probabilistic circuits and expected predictions, to design scalable algorithms for these tasks.
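
One way to make this notion precise (the notation here is illustrative and may differ from the talk's exact formulation): given a classifier f, an instance x with prediction f(x) = c, a distribution D over the features, and a threshold \delta, a subset S of the features is probabilistically sufficient if

    \[
      \Pr_{X \sim D}\big( f(X) = c \;\big|\; X_S = x_S \big) \;\ge\; \delta,
    \]

and a probabilistic sufficient explanation is a 'simplest' such S, for instance one of minimum size. Evaluating the conditional probability requires reasoning about the feature distribution, which is where tractable probabilistic models such as probabilistic circuits come in.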

Invited Talk

Joao Marques-Silva: Formal Reasoning Methods in Machine Learning Explainability

Joao Marques-Silva is a Research Director at CNRS, affiliated with IRIT, Toulouse, France. He also leads the Chair on Deep Learner Explanation and Verification of the Artificial and Natural Intelligence Toulouse Institute (ANITI). Before joining CNRS, he was affiliated with the University of Lisbon in Portugal, University College Dublin in Ireland, and the University of Southampton in the United Kingdom. Dr. Marques-Silva is a Fellow of the IEEE and was a recipient of the 2009 CAV Award for fundamental contributions to the development of high-performance Boolean satisfiability solvers.

Formal Reasoning Methods in Machine Learning Explainability

The envisioned applications of machine learning (ML) in safety-critical settings hinge on systems that are robust in their operation and that can be trusted. This talk gives an overview of recent efforts on applying automated reasoning tools to explaining non-interpretable (black-box) ML models, and relates such efforts with past work on reasoning about inconsistent logic formulas. Moreover, the talk details the computation of rigorous explanations of black-box models, and how these serve to assess the quality of widely used heuristic explanation approaches. The talk also covers important properties of rigorous explanations, namely duality properties between different kinds of explanations. Finally, the talk briefly overviews ongoing work on tractable approaches to explainability.
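
As background for the duality mentioned above, the formal explainability literature commonly uses the following two notions, stated here in a simplified form. For a classifier f over a feature set F and an instance x with prediction f(x) = c, an abductive explanation is a subset-minimal set X of features such that

    \[
      \forall x'.\; \big( x'_{X} = x_{X} \big) \rightarrow f(x') = c,
    \]

i.e., fixing the features in X to their values in x is enough to force the prediction, while a contrastive explanation is a subset-minimal set Y of features that can be changed, with all other features held fixed, so as to change the prediction. One duality result from this literature is that abductive explanations are exactly the minimal hitting sets of the contrastive explanations, and vice versa.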

Invited Talk

Sameer Singh: Explaining Machine Learning Predictions

Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine (UCI). He works primarily on the robustness and interpretability of machine learning algorithms, along with models that reason with text and structure for natural language processing. Sameer was a postdoctoral researcher at the University of Washington and received his PhD from the University of Massachusetts Amherst; during his PhD he also worked at Microsoft Research, Google Research, and Yahoo! Labs. He was selected as a DARPA Riser, has been awarded the grand prize in the Yelp Dataset Challenge, a Yahoo! Key Scientific Challenges award, and the UCI Mid-Career Excellence in Research Award, and recently received the Hellman and Noyce Faculty Fellowships. His group has received funding from the Allen Institute for AI, Amazon, NSF, DARPA, Adobe Research, Base 11, and FICO. Sameer has published extensively at machine learning and natural language processing venues, including paper awards at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, and ACL 2020.

Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities

As machine learning is deployed in all aspects of society, it has become increasingly important to ensure that stakeholders understand and trust these models. Decision makers must have a clear understanding of model behavior so they can diagnose errors and potential biases, and decide when and how to employ these models. However, most accurate models deployed in practice are not interpretable, making it difficult for users to understand where the predictions come from, and thus difficult to trust. Recent work on explanation techniques in machine learning offers an attractive solution: intuitive explanations for "any" machine learning model, obtained by approximating complex models with simpler ones. In this tutorial, we will discuss several post hoc explanation methods, focusing on their advantages and limitations. For each family of explanation techniques, we will present the problem setup, prominent methods, and example applications, and then discuss their vulnerabilities and shortcomings. We hope to provide a practical and insightful introduction to explainability in machine learning. (This material was developed in collaboration with Himabindu Lakkaraju and Julius Adebayo.)
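
As a concrete instance of the "approximate a complex model with a simpler one" idea, here is a minimal sketch of a LIME-style local surrogate explanation. It is a simplified illustration, not a reproduction of any specific tool covered in the tutorial; the function name and parameters are hypothetical.

    import numpy as np
    from sklearn.linear_model import Ridge

    def local_surrogate_explanation(predict_fn, x, n_samples=500, sigma=1.0, seed=0):
        """Fit a weighted linear surrogate of a black-box model around instance x.

        predict_fn: maps an array of inputs to the model's class-1 probabilities.
        Returns one weight per feature, indicating local importance near x.
        """
        rng = np.random.default_rng(seed)
        # Probe the black-box model on random perturbations of x.
        Z = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
        y = predict_fn(Z)
        # Give more weight to perturbations that stay close to x.
        w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * sigma ** 2))
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=w)
        return surrogate.coef_

    # Hypothetical usage with any classifier exposing predict_proba:
    # weights = local_surrogate_explanation(lambda Z: model.predict_proba(Z)[:, 1], x)

The surrogate's coefficients are only a local, approximate account of the model's behavior, which is exactly why the tutorial's discussion of the vulnerabilities and shortcomings of such post hoc methods matters.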

Master class organisers

Alexey Ignatiev
Nina Narodytska