Keynote Speakers


Prof. Dr. Yi Ma

IEEE Fellow, ACM Fellow, SIAM Fellow

Electrical Engineering and Computer Sciences
University of California, Berkeley

Yi Ma is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests include computer vision, high-dimensional data analysis, and intelligent systems. Yi received his Bachelor’s degrees in Automation and Applied Mathematics from Tsinghua University in 1995, two Master’s degrees in EECS and Mathematics in 1997, and a PhD in EECS from UC Berkeley in 2000. He was on the faculty of the ECE Department at UIUC from 2000 to 2011, served as principal researcher and manager of the Visual Computing group at Microsoft Research Asia from 2009 to 2014, and was the Executive Dean of the School of Information Science and Technology at ShanghaiTech University from 2014 to 2017. He joined the faculty of UC Berkeley EECS in 2018. He has published about 60 journal papers, 120 conference papers, and three textbooks on computer vision, generalized principal component analysis, and high-dimensional data analysis. He received the NSF CAREER Award in 2004 and the ONR Young Investigator Award in 2005, as well as the David Marr Prize at ICCV 1999 and best paper awards at ECCV 2004 and ACCV 2009. He served as Program Chair for ICCV 2013 and General Chair for ICCV 2015. He is a Fellow of the IEEE, the ACM, and SIAM.

Speech Title: Deep (Convolution) Networks from First Principles

Abstract: In this talk, we offer an entirely “white box” interpretation of deep (convolution) networks from the perspective of data compression (and group invariance). In particular, we show how modern deep layered architectures, linear (convolution) operators and nonlinear activations, and even all parameters can be derived from the principle of maximizing rate reduction (with group invariance). All layers, operators, and parameters of the network are explicitly constructed via forward propagation, instead of learned via back propagation. All components of the resulting network, called ReduNet, have precise optimization, geometric, and statistical interpretations. Several nice surprises arise from this principled approach: it reveals a fundamental tradeoff between invariance and sparsity for class separability; it reveals a fundamental connection between deep networks and the Fourier transform for group invariance, namely the computational advantage in the spectral domain (why spiking neurons?); and it clarifies the mathematical roles of forward propagation (optimization) and backward propagation (variation). In particular, the so-obtained ReduNet is amenable to fine-tuning via both forward and backward (stochastic) propagation, with both optimizing the same objective.

This is joint work with students Yaodong Yu, Ryan Chan, and Haozhi Qi of Berkeley, Dr. Chong You, now at Google Research, and Professor John Wright of Columbia University. A related paper can be found at:
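As background for the rate reduction principle mentioned in the abstract, the related work on Maximal Coding Rate Reduction (MCR²) by the same group maximizes the difference between the coding rate of all learned features and the sum of the rates of each class. Stated here from that published line of work (with Z the feature matrix of m samples in dimension d, Π_j the membership matrix of class j, and ε the allowed distortion), the objective has the form:

```latex
R(Z, \epsilon) = \frac{1}{2}\log\det\!\Big(I + \frac{d}{m\epsilon^{2}}\, Z Z^{\top}\Big),
\qquad
\Delta R(Z, \Pi, \epsilon) = R(Z, \epsilon) \;-\; \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\,
\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^{2}}\, Z \Pi_j Z^{\top}\Big).
```

Each layer of ReduNet can then be read as one projected gradient ascent step on ΔR, which is what makes forward propagation itself an optimization.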



Dr. Wenjun Zeng (Kevin)

IEEE Fellow, Sr. Principal Research Manager

Microsoft Research Asia, Beijing, China


Wenjun (Kevin) Zeng is a Sr. Principal Research Manager and a member of the Senior Leadership Team at Microsoft Research Asia. He has been leading the video analytics research powering the Microsoft Cognitive Services, Azure Media Analytics Services, Microsoft Office, Dynamics, and Windows Machine Learning since 2014. He was with the Computer Science Dept. of Univ. of Missouri from 2003 to 2016, most recently as a Full Professor. Prior to that, he worked for PacketVideo Corp., San Diego, CA; Sharp Labs of America, Camas, WA; Bell Labs, Murray Hill, NJ; and Panasonic Technology, Princeton, NJ. He has contributed significantly to the development of international standards (ISO MPEG, JPEG2000, and Open Mobile Alliance). He received his B.E., M.S., and Ph.D. degrees from Tsinghua Univ., the Univ. of Notre Dame, and Princeton Univ., respectively. He is on the Editorial Board of the International Journal of Computer Vision, and has been an Associate Editor-in-Chief, Associate Editor, or Steering Committee member for a number of IEEE journals. He has served as the General Chair or TPC Chair for several IEEE conferences (e.g., ICME’2018, ICIP’2017). He is a Fellow of the IEEE.

Speech Title: Unlocking the Potential of Disentangled Representation Learning

Abstract: It has been argued that for artificial intelligence to fundamentally understand the world around us, it must learn to identify and disentangle the underlying explanatory factors hidden in observed low-level sensory data. Significant progress has been made in recent years in both the theoretical development and practical solutions of disentangled representation learning, yet both seem to be struggling with some fundamental limitations. In this talk, I will first provide an overview of recent developments in the field and identify some major challenges. I will then introduce some recent works developed at Microsoft Research Asia that strive to address these challenges, both in theory and in practice. In particular, I will present a theoretical framework that unifies the group-theory-based definition with popular VAE-based systems, and a practical system, Disentanglement via Contrast (DisCo), that addresses the trade-off between disentangling capability and generation quality. I will also discuss some applications (e.g., image editing, style transfer, content creation, domain generalization) of this powerful concept and a few future directions.

Prof. Dr. Lin Weisi

IEEE Fellow, IET Fellow

Nanyang Technological University, Singapore

Lin Weisi is an active researcher in intelligent image processing, perception-based signal modelling and assessment, video compression, and multimedia communication. He was the Lab Head of Visual Processing at the Institute for Infocomm Research (I2R). He is a Professor in the School of Computer Science and Engineering, Nanyang Technological University, where he also served as the Associate Chair (Research).

He is a Fellow of the IEEE and the IET, and was named a Highly Cited Researcher in 2019 and 2020 by Web of Science. He was elected a Distinguished Lecturer of both the IEEE Circuits and Systems Society (2016–17) and the Asia-Pacific Signal and Information Processing Association (2012–13), and has given keynote, invited, tutorial, and panel talks at 30+ international conferences. He has been an Associate Editor for IEEE Trans. Image Process., IEEE Trans. Circuits Syst. Video Technol., IEEE Trans. Multimedia, IEEE Signal Process. Lett., Quality and User Experience, and J. Visual Commun. Image Represent. He chaired the IEEE MMTC QoE Interest Group (2012–2014), and has been a Technical Program Chair for IEEE 2013, QoMEX 2014, PV 2015, PCM 2012, and IEEE VCIP 2017. He believes that good theory is practical, and has delivered 10+ major systems and modules for industrial deployment with the technology he developed.

Speech Title: From Weber’s Law to Data-driven Modeling: Toward JND in Human Perception

Abstract: Just-Noticeable Difference (JND) refers to the minimum signal change that can be sensed by a human being, and its formulation and computational modeling are prerequisites for user-centric designs that turn human perceptual limitations into meaningful system advantages. In this talk, a systematic view and a classification of JND research will first be presented, tracing the field from the pioneering work of Ernst Weber’s time. Then, computational models and applications for visual JND, which represent the majority of the related research so far, will be reviewed, from conventional handcrafted approaches to recently emerging data-driven models. Furthermore, initial research attempts regarding audio, smell, and haptic/temperature signals, as well as cross-modality efforts, will be introduced. Finally, possible future opportunities will be discussed.
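For context on the historical starting point of the talk, Weber’s law states that the just-noticeable change ΔI in a stimulus is proportional to the background intensity I:

```latex
\frac{\Delta I}{I} = k,
```

where the constant k (the Weber fraction) depends on the stimulus type. Modern visual JND models can be viewed as replacing this single constant with handcrafted or learned functions of local content, such as luminance adaptation and texture masking.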



Prof. Dr. Guoqiang Zhong

Ocean University of China, China

Guoqiang Zhong received his Ph.D. from the Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China, in 2011. Between October 2011 and July 2013, he was a Postdoctoral Fellow at the University of Quebec in Montreal, Canada. Since March 2014, he has been an associate professor, and subsequently a full professor, at Ocean University of China, Qingdao, China. He has published 4 books, 4 book chapters, and more than 80 technical papers in the areas of artificial intelligence, pattern recognition, machine learning, and data mining. He has served as a PC member or reviewer for many international conferences and journals, such as IEEE TNNLS, IEEE TKDE, IEEE TCSVT, Pattern Recognition, AAAI, AISTATS, and ICPR, and has been recognized as an outstanding reviewer by several journals, including Pattern Recognition, Knowledge-Based Systems, and Neurocomputing. He won the best paper award at BICS 2019 and the APNNS Young Researcher Award. He is a member of the ACM, IEEE, and IAPR, a senior member of the CCF, and a professional committee member of CAAI-PR, CAA-PRMI, and CSIG-DIAR.

Speech Title: Automatic Design of Deep Neural Networks

Abstract: Deep neural networks (DNNs) have been widely used in many applications, such as pattern recognition and computer vision. However, the design of DNNs requires expertise to deal with many hyperparameters and to select a proper structure from many possible configurations. In this talk, I will present some novel automatic design methods for DNNs, including DNA Computing Inspired Deep Networks Design, Automatic Design of Deep Networks with Neural Blocks, AutoML for DenseNet Compression, Differentiable Light-Weight Architecture Search, and Generative Neural Architecture Search. Together, these methods represent the state of the art in neural architecture search.
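As a minimal illustration of the differentiable search idea named in the abstract, the sketch below shows a DARTS-style continuous relaxation: the discrete choice among candidate operations on an edge is replaced by a softmax-weighted mixture, so architecture parameters can be optimized by gradient descent. The candidate operations and the parameters `alpha` here are illustrative assumptions, not the speaker’s actual methods.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge of a search cell (illustrative choices).
ops = [
    lambda x: x,                  # identity (skip connection)
    lambda x: np.maximum(x, 0),   # ReLU
    lambda x: np.zeros_like(x),   # "zero" op, i.e. prune the edge
]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax-weighted sum of all candidate ops.
    After search, only the op with the largest weight is kept."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.array([0.1, 2.0, -1.0])   # learnable architecture parameters
y = mixed_op(x, alpha)               # differentiable w.r.t. alpha
chosen = int(np.argmax(softmax(alpha)))  # op selected at the end of search
```

In a full system, `alpha` is updated on validation data while the network weights are updated on training data; the sketch only shows the relaxation that makes this bilevel optimization differentiable.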