Plenary Speakers

Edward Y. Chang

President of HTC Research & Healthcare (DeepQ)

Monday, September 23, 09:00-10:00

Advancing Healthcare with AI, VR, and Blockchain

The cost of healthcare continues to rise because of the dilemma of satisfying infinite needs with only finite resources. Recent advances in artificial intelligence may help combat the problem of limited resources through AI-facilitated diagnosis, but many technical challenges remain to be addressed. This talk shares our experience in dealing with AI training data limited in quantity and diversity via transfer learning, knowledge-fused GANs, and REFUEL. It also presents MedXchange, a collaborative effort with Stanford that uses a multi-layer blockchain to preserve data privacy and security. Finally, the talk shows how VR/AR can effectively support medical education and surgery.

Edward Chang has served as President of AI Research and Healthcare (DeepQ) at HTC since 2012. He also serves as an adjunct professor at Stanford University. His most recent notable work is co-leading the DeepQ project, which won the XPRIZE medical IoT contest in 2017 with a 1M USD prize. Prior to his current posts, Ed was a director of Google Research from 2006 to 2012, leading research and development in areas including scalable machine learning, indoor localization, and Google Q&A. His contributions to data-driven machine learning (US patents 8798375 and 9547914) and his sponsorship of ImageNet helped fuel the success of AlexNet and the recent resurgence of AI. His open-source code for parallel SVMs, parallel LDA, parallel spectral clustering, and parallel frequent itemset mining (adopted by Berkeley Spark) has been collectively downloaded over 30,000 times. Prior to Google, Ed was a full professor of Electrical & Computer Engineering at the University of California, Santa Barbara, which he joined in 1999 after receiving his PhD from Stanford University. Ed is an IEEE Fellow for his contributions to scalable machine learning.

Pierre Vandergheynst

Vice-President for Education at the Ecole Polytechnique Fédérale de Lausanne (EPFL)

Tuesday, September 24, 09:00-10:00

Signal Processing on Graphs. Past. Present. Future? 

Signal processing on graphs is a recent body of work that broadly aims to bring the power of digital signal processing to the large class of data defined on graphs or networks. It has quickly established itself through numerous interesting results that bridge the language of signal processing with machine learning, while also leveraging computational methods from numerical linear algebra. In this plenary, we will review the basics of signal processing on graphs and some of its most interesting results (sampling, interpolation, and computation, among others), with an emphasis on how these methods, with their signal processing roots, offer new insights into network science and novel AI systems.
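A minimal sketch of the core idea behind the field the talk surveys (not code from the talk itself): on a graph, the eigenvectors of the graph Laplacian play the role that complex exponentials play in classical Fourier analysis, and the eigenvalues act as graph frequencies. The 4-node path graph below is an illustrative toy example.

```python
import numpy as np

# Toy example: a 4-node path graph with adjacency matrix W.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Combinatorial graph Laplacian L = D - W, where D is the degree matrix.
L = np.diag(W.sum(axis=1)) - W

# Eigenvectors of L are the graph Fourier modes; eigenvalues are
# the graph frequencies (eigh returns them in ascending order).
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 3.0, 4.0])   # a graph signal: one value per node
x_hat = U.T @ x                       # forward graph Fourier transform
x_rec = U @ x_hat                     # inverse transform recovers x

print(np.allclose(x_rec, x))  # True: the transform is orthonormal
```

The smallest eigenvalue is always 0 (its eigenvector is constant on a connected graph), mirroring the DC component of the classical Fourier transform; filtering, sampling, and interpolation on graphs are then phrased in this spectral domain.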

Pierre Vandergheynst is Professor of Electrical Engineering at the Ecole Polytechnique Fédérale de Lausanne (EPFL), where he also holds a courtesy appointment in Computer Science. A theoretical physicist by training, Pierre is a renowned expert in the mathematical modelling of complex data. His current research focuses on data processing with graph-based methods, with a particular emphasis on machine learning and network science. Pierre Vandergheynst has served as associate editor of multiple flagship journals, including the IEEE Transactions on Signal Processing and the SIAM Journal on Imaging Sciences. He is the author or co-author of more than 100 published technical papers and has received several best paper awards from technical societies. He was awarded the Apple ARTS award in 2007 and the De Boelpaepe prize of the Royal Academy of Sciences of Belgium in 2010.



Yann LeCun

Facebook AI Research & New York University

Wednesday, September 25, 09:00-10:00

Self-Supervised Learning: The Future of Signal Understanding?

Deep learning has caused revolutions in computer perception, signal restoration/reconstruction, signal synthesis, natural language understanding, and control. But almost all of these successes rely on supervised learning, where the machine is required to predict human-provided annotations. For control and game AI, most systems use model-free reinforcement learning, which requires too many trials to be practical in the real world. In contrast, animals and humans seem to learn vast amounts of knowledge about the world through mere observation and occasional actions.

Based on the hypothesis that prediction is the essence of intelligence, self-supervised learning (SSL) trains a machine to predict missing information: missing words in a text, occluded parts of an image, future frames in a video, and, more generally, to "fill in the blanks". SSL approaches have been very successful in natural language processing, but less so in image understanding, because of the difficulty of modeling uncertainty in high-dimensional continuous spaces. A general energy-based formulation of SSL will be presented that relies on regularized latent-variable models. These models yield excellent performance in image completion and video prediction. A number of applications will be described, including the use of a latent-variable video prediction model to train autonomous cars to drive defensively.
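The "fill in the blanks" idea can be made concrete with a deliberately tiny sketch (not LeCun's method, and far simpler than any real SSL model): the supervision signal comes entirely from the data itself. We hide a word and learn to predict it from its left neighbor, with no human-provided labels; the toy corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy self-supervised "fill in the blank": the labels are the data itself.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count, for each left-context word, which words follow it in the corpus.
follows = defaultdict(Counter)
for left, right in zip(corpus, corpus[1:]):
    follows[left][right] += 1

def fill_blank(left_word):
    """Predict the masked word given its left-neighbor context."""
    return follows[left_word].most_common(1)[0][0]

print(fill_blank("the"))  # "cat" is the most frequent word after "the"
print(fill_blank("sat"))  # "on" always follows "sat" in this corpus
```

Large-scale SSL in NLP applies the same principle with far richer context and learned representations; the difficulty the talk highlights is that in images and video the "blank" lives in a high-dimensional continuous space where many completions are plausible, which is what motivates the energy-based, latent-variable formulation.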

Yann LeCun is Director of AI Research at Facebook and Silver Professor at New York University, affiliated with the Courant Institute, the Center for Neural Science, and the Center for Data Science, for which he served as founding director until 2014. He received an EE Diploma from ESIEE (Paris) in 1983 and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU faculty part-time. He was a visiting professor at the Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video, and speech recognition. He is a member of the US National Academy of Engineering and the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the 2018 ACM Turing Award, and an honorary doctorate from IPN, Mexico.