Chang Zou
Ph.D. Student @ School of AI, Shanghai Jiao Tong University
Email: shenyizou@gmail.com
Wechat: TBCXzc
ICLR, ICCV, CVPR, MM
Current: Shenzhen, Guangdong
Hometown: Chengdu, Sichuan
Google Scholar

About Me
Hi, I’m Chang Zou. I am currently an undergraduate at Yingcai Honors College, UESTC, where I will receive my B.Sc. in Artificial Intelligence in June 2026. Following my graduation, I will begin my Ph.D. journey in September 2026 at the School of AI, Shanghai Jiao Tong University (SJTU), under the supervision of Prof. Linfeng Zhang.
My research primarily focuses on Precise and Efficient AIGC. From 2024 to 2026, my work centered on inference acceleration for diffusion models, where I achieved significant milestones in speeding up image and video generation (you may recognize my work through the TaylorSeer project). Since 2026, I have expanded my exploration into agentic video generation, world models, and unified native multi-modal LLMs that bridge generation and understanding.
I maintain a critical yet open-minded attitude toward academic research and discussion. I am always eager to connect, so feel free to reach out! I am also currently seeking internship opportunities in related fields and welcome any inquiries.
Experience
Ph.D. Student | EPIC Lab @ SAI, Shanghai Jiao Tong University Starting Sept 2026
- Incoming doctoral candidate focusing on next-generation generative models.
Research Intern (Qingyun Program) | Foundation Model Team @ Tencent Hunyuan 2025 – Present
- Conducting foundation model research on large-scale computing clusters.
- Focusing on acceleration techniques such as caching and distillation.
- Contributing as a Core Member to projects including HunyuanVideo 1.5.
Research Intern | EPIC Lab @ SAI, Shanghai Jiao Tong University 2024 – 2026
- Advised by Prof. Linfeng Zhang, focusing on Efficient AIGC research.
Undergraduate | Yingcai Honors College, UESTC 2022 – 2026
- Major: Mathematics-Physics Fundamental Science (Yingcai Honors Program of UESTC), Artificial Intelligence track.
Selected Publications
- HunyuanVideo 1.5 Technical Report. Tencent Hunyuan Foundation Model Team (as Core Contributor). arXiv preprint arXiv:2511.18870, 2025.
- Accelerating Diffusion Transformers with Token-wise Feature Caching. In The Thirteenth International Conference on Learning Representations (ICLR), 2025.
- From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025.
- DisCa: Accelerating Video Diffusion Transformers with Distillation-Compatible Learnable Feature Caching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026 (to appear).