Invited Speakers of VECoS 2021
An Orchestrated Reflection of Quality Assurance in the era of Data-Driven Intelligent Systems
Lei Ma, University of Alberta, Canada
Abstract: Deep learning (DL) systems continue to make substantial strides in enabling cutting-edge intelligent applications, ranging from image recognition, virtual assistants (e.g., Alexa, Siri), art design, and autonomous vehicles to medical diagnosis. However, the development of current DL systems still lacks systematic engineering guidance regarding the adoption of quality, security, and reliability assurance standards, as well as mature toolchain support. The black-box nature of DL makes the behaviors of DL systems difficult to interpret and understand, as observed in many recent failures of such systems (e.g., Tesla/Uber car accidents, and the manipulation of Alexa and Siri with hidden commands). To enable safe, secure, and reliable DL systems, our research reconsiders the whole development lifecycle of DL systems, with a particular focus on the notion of “secure deep learning engineering”. On this basis, we have conducted a series of studies addressing the current challenges in quality, reliability, and security assurance of general-purpose deep learning systems. In this talk, I provide a high-level overview of our team’s efforts on the testing and analysis of deep learning software, highlighting a few concrete innovations that have already been adopted in practice. Furthermore, I outline challenges, opportunities, my vision, and future plans for laying the foundations for engineering safe, secure, and reliable DL systems.
Short Bio: Dr. Lei Ma is currently an Associate Professor and Canada CIFAR AI Chair at the University of Alberta, where he leads the Intelligent System Lab (ISL). He is also affiliated with the Alberta Machine Intelligence Institute as an Amii Fellow, and holds a research fellow position at Kyushu University, Japan, where he co-leads the Intelligent Software Engineering Lab. He received his Ph.D. and M.E. from the University of Tokyo, and his B.E. from Shanghai Jiao Tong University. His research centers on the interdisciplinary fields of Software Engineering (SE), Security, and Trustworthy AI, with a special focus on the quality, reliability, and security assurance of machine learning and AI solutions (e.g., self-driving cars). Much of his work has been published in top-tier SE and AI venues (TSE, ICSE, FSE, ASE, ISSTA, ICML, NeurIPS, TNNLS, ACM MM, AAAI, IJCAI, ECCV, CAV). He is a recipient of more than 10 prestigious academic awards, including 3 ACM SIGSOFT Distinguished Paper Awards, a Free-and-Open-Source Software (FOSS) Impact Paper Award (MSR’18), a Best Paper Candidate Award (SANER’16), and a Best Paper Award (HotWeb’15). He won the championship of the 2015 IEEE International Search-Based Software Testing Contest. Many of his research innovations for secure and reliable AI have been adopted by industry worldwide, e.g., at Nvidia and Mitsubishi. More detailed information can be found on his personal website.
On Decentralized System Monitoring
Ylies Falcone, Univ. Grenoble Alpes
Abstract: Decentralized systems abound; they include modern processors, car pools, drone swarms, and decentralized finance applications. The complexity of such systems, as well as the unpredictability of their execution environments, renders exhaustive verification techniques unfeasible and makes dynamic verification techniques such as monitoring desirable. In this talk, I will discuss the decentralization of monitoring techniques for verifying the runtime behavior of decentralized systems against linear-time specifications. I will give an overview of the main results achieved during the last decade and describe some research outlooks.
Short Bio: Yliès Falcone received his Master’s degree (2006) and PhD (2009) in computer science from the Univ. Grenoble Alpes at the Vérimag Laboratory. His research interests concern formal runtime validation techniques for various application domains, that is, techniques for evaluating whether a system meets a set of desired properties at runtime. His general scientific objective is to design models and formal methods, and to implement tools, that help engineers build more reliable software. He enjoys contributing to the Runtime Verification community by serving on the steering committee of the related conference and by organizing events on Runtime Verification. He is the co-founder and co-organizer of the international competition of Runtime Verification tools and of the associated international schools. He has been an invited researcher at several institutions, including NASA JPL in Pasadena (USA), NICTA Canberra (Australia), the University of Manchester (UK), and the University of Illinois at Urbana-Champaign (USA). He is currently an associate professor at Univ. Grenoble Alpes.