Chunyuan Li

I am a principal researcher at Microsoft Research, Redmond. My recent research focuses on large-scale pre-training in computer vision and natural language processing. Some recent works include:

  • Vision-and-language pre-training [1, 2, 3]
  • Self-supervised visual representation learning [1]
  • Deep generative models at scale [1, 2, 3]

I obtained my PhD in machine learning at Duke University, advised by Prof. Lawrence Carin. My PhD research focused on probabilistic deep learning.

news

Apr 21, 2022 Two papers released [CVPR Tutorial -- CV in the Wild: Knowledge & Benchmark]:
  • [K-LITE] demonstrates the effectiveness of external knowledge in improving language-image models (UniCL/CLIP/GLIP) for zero-/few-shot task transfer
  • [ELEVATER] is a platform with 20 public image classification datasets and 35 public object detection datasets for evaluating language-image models on task-level visual transfer. [Benchmark Website]
Mar 25, 2022 Upcoming events as a co-organizer:
Mar 22, 2022 FocalNet [paper] [code] - a simple attention-free architecture for vision!
Mar 1, 2022 4 papers accepted to CVPR (1 oral and 3 posters):
Dec 7, 2021 A vision-language approach to visual recognition:
  • [Florence]: A new backbone learner that demonstrates the power of unified language-image-label contrastive learning; trained on 800M image-text pairs, it offers superior performance over CLIP (a minimal sketch of the unified objective follows the news list)
  • [GLIP]: An object-level language-image model for object detection and phrase grounding
Nov 27, 2021 Our generative model Lafite 🍾 achieves SoTA text-to-image synthesis performance: on par with DALL-E with only 1% of the model size.
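To make the "unified language-image-label contrast" idea above concrete, here is a minimal PyTorch sketch of a UniCL-style objective: image-text pairs that share a label are all treated as positives for one another, so web image-text data (unique labels) and classification data (class-name prompts) can be trained with one loss. The function name, temperature, and batch construction are illustrative assumptions, not the released UniCL/Florence implementation.

```python
# Sketch of a unified image-text-label contrastive loss (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def unified_contrastive_loss(image_feats, text_feats, labels, temperature=0.07):
    """image_feats, text_feats: (N, D) L2-normalized embeddings of a batch.
    labels: (N,) integers; image-text pairs get unique ids, classification
    images reuse their class id, so same-class entries become positives."""
    logits = image_feats @ text_feats.t() / temperature            # (N, N) similarities
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()     # 1 where labels match
    targets_i2t = pos / pos.sum(dim=1, keepdim=True)               # per-image distribution over texts
    targets_t2i = pos / pos.sum(dim=0, keepdim=True)               # per-text distribution over images
    loss_i2t = -(targets_i2t * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2i = -(targets_t2i.t() * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)

# Example: one web image-text pair (unique label 0) mixed with two images of
# the same class (label 1) paired with a class-name prompt.
if __name__ == "__main__":
    img = F.normalize(torch.randn(3, 512), dim=-1)
    txt = F.normalize(torch.randn(3, 512), dim=-1)
    lbl = torch.tensor([0, 1, 1])
    print(unified_contrastive_loss(img, txt, lbl))
```

With only image-text data (all labels unique) this reduces to the standard bidirectional CLIP-style contrastive loss; with only image-label data it behaves like supervised contrastive learning against class-name prompts.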

recent publications

  1. K-LITE
    K-LITE: Learning Transferable Visual Models with External Knowledge
    Shen, Sheng*, Li, Chunyuan*, Hu, Xiaowei*, Xie, Yujia, Yang, Jianwei, Zhang, Pengchuan, Rohrbach, Anna, Gan, Zhe, Wang, Lijuan, Yuan, Lu, Liu, Ce, Keutzer, Kurt, Darrell, Trevor, and Gao, Jianfeng
    arXiv preprint 2022
  2. ELEVATER
    ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
    Li, Chunyuan*, Liu, Haotian*, Li, Liunian Harold, Zhang, Pengchuan, Aneja, Jyoti, Yang, Jianwei, Jin, Ping, Lee, Yong Jae, Hu, Houdong, Liu, Zicheng, and Gao, Jianfeng
    arXiv preprint 2022
  3. UniCL
    Unified Contrastive Learning in Image-Text-Label Space
    Yang, Jianwei*, Li, Chunyuan*, Zhang, Pengchuan*, Xiao, Bin*, Liu, Ce, Yuan, Lu, and Gao, Jianfeng
    CVPR 2022