Prof. Xudong Jiang, Nanyang Technological University, Singapore
IEEE Fellow

Prof. Xudong Jiang received the B.Eng. and M.Eng. degrees from the University of Electronic Science and Technology of China (UESTC) and the Ph.D. degree from Helmut Schmidt University, Hamburg, Germany. He joined Nanyang Technological University (NTU), Singapore, as a faculty member in 2004 and served as the Director of the Centre for Information Security from 2005 to 2011. He is currently a Professor at NTU. Dr. Jiang holds 7 patents and has published over 200 papers in IEEE journals, including 6 in IEEE T-PAMI and 12 in IEEE T-IP. Three of his journal papers have been listed by ESI among the top 1% most highly cited papers in the academic field of engineering. He served as a member of the IFS TC of the IEEE Signal Processing Society from 2015 to 2017, as an Associate Editor of IEEE Signal Processing Letters from 2014 to 2018, and as an Associate Editor of IEEE Transactions on Image Processing from 2016 to 2020. Dr. Jiang is currently an IEEE Fellow and serves as a Senior Area Editor of IEEE Transactions on Image Processing and the Editor-in-Chief of IET Biometrics. His current research interests include image processing, pattern recognition, computer vision, machine learning, and biometrics.

 

Xudong Jiang (Fellow, IEEE) received the bachelor’s and master’s degrees from the University of Electronic Science and Technology of China, and the Ph.D. degree from Helmut Schmidt University, Hamburg, Germany. From 1998 to 2004, he was with the Institute for Infocomm Research, A*STAR, Singapore, as a Lead Scientist and the Head of the Biometrics Laboratory. He joined Nanyang Technological University (NTU), Singapore, as a faculty member in 2004, where he served as the Director of the Centre for Information Security from 2005 to 2011. He is currently a Professor with the School of EEE, NTU, and serves as the Director of the Centre for Information Sciences and Systems. He has authored over 200 papers, including over 60 in IEEE journals (10 in T-PAMI and 18 in T-IP) and over 30 in top conferences such as CVPR, ICCV, ECCV, AAAI, ICLR, and NeurIPS. His current research interests include computer vision, machine learning, pattern recognition, image processing, and biometrics. Dr. Jiang has served as an Associate Editor for IEEE SPL and IEEE T-IP. He is currently a Fellow of the IEEE and serves as a Senior Area Editor for IEEE T-IP and the Editor-in-Chief of IET Biometrics.

 

Speech Title: How Deep CNN Revolutionizes MLP and How Transformer Revolutionizes CNN
Abstract:
Discovering knowledge from data has many applications in artificial intelligence (AI) systems. Machine learning offers a way to extract the right information from high-dimensional data, so it is no surprise that learning-based approaches have emerged across various AI applications. The power of machine learning was already demonstrated 30 years ago during the boom of neural networks, but its successful application to the real world came only in recent years, after deep convolutional neural networks (CNN) were developed. This is because machine learning alone can only solve problems within the training data, whereas the system is designed for unknown data outside the training set. This gap can be bridged by regularization: human knowledge guiding, or interfering with, the machine learning. This speech will analyze these concepts and ideas from traditional neural networks to the deep CNN and the Transformer. It will answer why traditional neural networks failed to solve real-world problems even after 30 years of intensive research and development, how the CNN solves the problems of traditional neural networks, and how the Transformer overcomes the limitations of the CNN and is now highly successful in solving various real-world AI problems.
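One concrete form of the "human knowledge guidance" the abstract mentions is the convolutional layer itself: locality and weight sharing are priors built into the architecture, which drastically shrink the hypothesis space compared with a fully connected (MLP) layer. The sketch below, with hypothetical sizes chosen purely for illustration (not taken from the talk), compares the parameter counts of the two layer types on the same image:

```python
# Illustrative sketch: weight sharing as a built-in prior.
# All sizes are hypothetical examples, not figures from the talk.

H, W = 224, 224        # spatial size of the input image and output feature map
C_in, C_out = 3, 64    # input channels (RGB) and output channels
K = 3                  # convolution kernel size (K x K)

# Fully connected (MLP) layer: every output unit connects to every input pixel,
# so the weight matrix has (H*W*C_in) x (H*W*C_out) entries.
mlp_params = (H * W * C_in) * (H * W * C_out)

# Convolutional layer: one small K x K kernel per (C_in, C_out) channel pair,
# shared across all spatial positions (translation invariance as a human prior).
cnn_params = K * K * C_in * C_out + C_out   # shared weights + biases

print(f"MLP  parameters: {mlp_params:,}")
print(f"Conv parameters: {cnn_params:,}")
```

The fully connected layer here would need roughly 4.8 x 10^11 parameters, against 1,792 for the convolution: the prior does most of the regularization before any training data is seen, which is one way to read the abstract's point about bridging the gap between training data and unknown data.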