Chang Zou

Ph.D. Student @ School of AI, Shanghai Jiao Tong University


Email: shenyizou@gmail.com

WeChat: TBCXzc

ICLR, ICCV, CVPR, MM

Current: Shenzhen, Guangdong

Hometown: Chengdu, Sichuan

Google Scholar

About Me

Hi, I’m Chang Zou. I am currently an undergraduate at Yingcai Honors College, UESTC, where I will receive my B.Sc. in Artificial Intelligence in June 2026. Following my graduation, I will begin my Ph.D. journey in September 2026 at the School of AI, Shanghai Jiao Tong University (SJTU), under the supervision of Prof. Linfeng Zhang.

My research primarily focuses on Precise and Efficient AIGC. From 2024 to 2026, my work centered on inference acceleration for diffusion models, where I achieved significant milestones in speeding up image and video generation (you may recognize my work through the TaylorSeer project). Since 2026, I have expanded my exploration into agentic video generation, world models, and unified native multi-modal LLMs that bridge generation and understanding.

I maintain a critical yet open-minded attitude toward academic research and discussions. I am always eager to connect, so please feel free to reach out! By the way, I am currently seeking internship opportunities in related fields and welcome any inquiries.

Experience

Ph.D. Student | EPIC Lab @ SAI, Shanghai Jiao Tong University Starting Sept 2026

  • Incoming doctoral candidate focusing on next-generation generative models.

Research Intern (Qingyun Program) | Foundation Model Team @ Tencent Hunyuan 2025 – Present

  • Conducting foundation model research on large-scale compute clusters.
  • Focusing on acceleration techniques such as caching and distillation.
  • Contributing as a Core Member to projects including HunyuanVideo 1.5.

Research Intern | EPIC Lab @ SAI, Shanghai Jiao Tong University 2024 – 2026

Undergraduate | Yingcai Honors College, UESTC 2022 – 2026

  • Major: Mathematics-Physics Fundamental Science (Yingcai Honors Program of UESTC), Direction of Artificial Intelligence.

Selected Publications

  1. HunyuanVideo 1.5 Technical Report
    Bing Wu, Chang Zou, Changlin Li, and 78 more authors
    Tencent Hunyuan Foundation Model Team, As Core Contributors, arXiv preprint arXiv:2511.18870, 2025
  2. Accelerating Diffusion Transformers with Token-wise Feature Caching
    Chang Zou*, Xuyang Liu*, Ting Liu, and 2 more authors
    In The Thirteenth International Conference on Learning Representations, 2025
  3. From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers
    Jiacheng Liu*, Chang Zou*, Yuanhuiyi Lyu, and 2 more authors
    In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025
  4. SpeCa: Accelerating Diffusion Transformers with Speculative Feature Caching
    Jiacheng Liu*, Chang Zou*, Yuanhuiyi Lyu, and 3 more authors
    In Proceedings of the 33rd ACM International Conference on Multimedia, 2025
  5. DisCa: Accelerating Video Diffusion Transformers with Distillation-Compatible Learnable Feature Caching
    Chang Zou, Changlin Li, Yang Li, and 7 more authors
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026
    to appear