Publications

Publications by categories in reverse chronological order. Generated by jekyll-scholar.

2026

  1. DisCa: Accelerating Video Diffusion Transformers with Distillation-Compatible Learnable Feature Caching
    Chang Zou, Changlin Li, Yang Li, and 7 more authors
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026
    to appear
  2. HiCache: Training-free Acceleration of Diffusion Models via Hermite Polynomial-based Feature Caching
    Liang Feng*, Shikang Zheng*, Jiacheng Liu, and 8 more authors
    In The Fourteenth International Conference on Learning Representations, 2026
  3. Forecast then Calibrate: Feature Caching as ODE for Efficient Diffusion Transformers
    Shikang Zheng*, Liang Feng*, Xinyu Wang, and 8 more authors
    In Proceedings of the AAAI Conference on Artificial Intelligence, 2026
  4. Let Features Decide Their Own Solvers: Hybrid Feature Caching for Diffusion Transformers
    Shikang Zheng, Guantao Chen, Qinming Zhou, and 6 more authors
    In The Fourteenth International Conference on Learning Representations, 2026
    to appear
  5. LESA: Learnable Stage-Aware Predictors for Diffusion Model Acceleration
    Peiliang Cai*, Jiacheng Liu*, Haowen Xu, and 3 more authors
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026
    to appear
  6. From Sketch to Fresco: Efficient Diffusion Transformer with Progressive Resolution
    Shikang Zheng, Guantao Chen, Lixuan He, and 4 more authors
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026
    to appear

2025

  1. HunyuanVideo 1.5 Technical Report
    Bing Wu, Chang Zou, Changlin Li, and 78 more authors
    Tencent Hunyuan Foundation Model Team (as core contributors), arXiv preprint arXiv:2511.18870, 2025
  2. Accelerating Diffusion Transformers with Token-wise Feature Caching
    Chang Zou*, Xuyang Liu*, Ting Liu, and 2 more authors
    In The Thirteenth International Conference on Learning Representations, 2025
  3. From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers
    Jiacheng Liu*, Chang Zou*, Yuanhuiyi Lyu, and 2 more authors
    In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025
  4. SpeCa: Accelerating Diffusion Transformers with Speculative Feature Caching
    Jiacheng Liu*, Chang Zou*, Yuanhuiyi Lyu, and 3 more authors
    In Proceedings of the 33rd ACM International Conference on Multimedia, 2025
  5. Compute Only 16 Tokens in One Timestep: Accelerating Diffusion Transformers with Cluster-Driven Feature Caching
    Zhixin Zheng*, Xinyu Wang*, Chang Zou, and 2 more authors
    In Proceedings of the 33rd ACM International Conference on Multimedia, 2025
  6. dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching
    Zhiyuan Liu, Yicun Yang, Yaojie Zhang, and 5 more authors
    arXiv preprint arXiv:2506.06295, 2025
  7. EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models
    Yantai Yang*, Yuhao Wang*, Zichen Wen, and 5 more authors
    In Advances in Neural Information Processing Systems, 2025
  8. EEdit: Rethinking the Spatial and Temporal Redundancy for Efficient Image Editing
    Zexuan Yan*, Yue Ma*, Chang Zou, and 3 more authors
    In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025
  9. Shifting AI Efficiency From Model-Centric to Data-Centric Compression
    Xuyang Liu*, Zichen Wen*, Shaobo Wang*, and 13 more authors
    arXiv preprint arXiv:2505.19147, 2025
  10. Token Pruning for Caching Better: 9 Times Acceleration on Stable Diffusion for Free
    Evelyn Zhang*, Bang Xiao*, Jiayi Tang, and 5 more authors
    arXiv preprint arXiv:2501.00375, 2025
  11. A Survey on Cache Methods in Diffusion Models: Toward Efficient Multi-Modal Generation
    Jiacheng Liu*, Xinyu Wang*, Yuqi Lin, and 10 more authors
    arXiv preprint arXiv:2510.19755, 2025
  12. FreqCa: Accelerating Diffusion Models via Frequency-Aware Caching
    Jiacheng Liu*, Peiliang Cai*, Qinming Zhou, and 9 more authors
    arXiv preprint arXiv:2510.08669, 2025

2024

  1. Accelerating Diffusion Transformers with Dual Feature Caching
    Chang Zou*, Shikang Zheng*, Evelyn Zhang, and 5 more authors
    arXiv preprint arXiv:2412.18911, 2024