Yu Deng (邓誉)

Email: t-yudeng[at]microsoft.com       Google Scholar      GitHub

I am a joint PhD student of the Institute for Advanced Study, Tsinghua University, and Microsoft Research Asia (MSRA). I am currently working as a research intern in the Visual Computing Group at MSRA, under the supervision of Senior Researcher Jiaolong Yang, Partner Research Manager Xin Tong, and Prof. Harry Shum.

I received my B.S. from the Department of Physics at Tsinghua University in 2017.

Publications
Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence
Yu Deng, Jiaolong Yang, Xin Tong
2021 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021
[PDF] [Code] [BibTeX]

We propose a novel Deformed Implicit Field (DIF) representation for modeling the 3D shapes of a category and generating dense correspondences among shapes with structural variations.

Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning
Yu Deng, Jiaolong Yang, Dong Chen, Fang Wen, Xin Tong
2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020, Oral Presentation
[PDF] [Code] [BibTeX]

We propose DiscoFaceGAN, an approach for generating face images of virtual people with disentangled, precisely controllable latent representations for identity, expression, pose, and illumination.

Deep 3D Portrait from a Single Image
Sicheng Xu, Jiaolong Yang, Dong Chen, Fang Wen, Yu Deng, Yunde Jia, Xin Tong
2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020
[PDF] [Code] [BibTeX]

We propose a learning-based approach for recovering the 3D geometry of a human head from a single portrait image without any ground-truth 3D data.

Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set
Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, Xin Tong
2019 IEEE Conference on Computer Vision and Pattern Recognition Workshop on Analysis and Modeling of Faces and Gestures (AMFG), CVPRW 2019, Best Paper Award
[PDF] [Code] [BibTeX]

We propose a deep 3D face reconstruction approach that leverages a robust hybrid loss function and performs multi-image face reconstruction by exploiting complementary information from different images for shape aggregation.