@thu-ml

TSAIL group

Tsinghua Statistical Artificial Intelligence & Learning Group

Pinned

  1. TurboDiffusion

    TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

    Python · 1.9k stars · 116 forks

  2. unidiffuser

    Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"

    Python · 1.5k stars · 90 forks

  3. SageAttention

    [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2–5× speedup over FlashAttention without losing end-to-end metrics across language, image, and video models (a minimal usage sketch follows this list).

    Cuda · 2.9k stars · 293 forks

  4. prolificdreamer

    ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight)

    Python · 1.6k stars · 47 forks

  5. ares

    A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.

    Python · 521 stars · 93 forks

  6. tianshou

    An elegant PyTorch deep reinforcement learning library.

    Python · 9k stars · 1.2k forks
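
SageAttention (pinned above) is positioned as a faster, quantized replacement for FlashAttention-style attention kernels. Below is a minimal usage sketch, assuming the sageattention package is installed and a CUDA GPU is available; the sageattn call and its tensor_layout/is_causal arguments follow the project's README, but exact signatures may vary between releases.

    # Minimal sketch: calling SageAttention as a drop-in attention kernel.
    # Assumes the `sageattention` package is installed and a CUDA GPU is present;
    # the sageattn(q, k, v, tensor_layout=..., is_causal=...) signature follows
    # the project's README and may differ across releases.
    import torch
    from sageattention import sageattn

    batch, heads, seq_len, head_dim = 2, 16, 1024, 64

    # Query/key/value in (batch, heads, seq, head_dim) layout ("HND"), fp16 on GPU.
    q = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
    k = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
    v = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")

    # Quantized attention, intended where one would otherwise call
    # torch.nn.functional.scaled_dot_product_attention or FlashAttention.
    out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
    print(out.shape)  # torch.Size([2, 16, 1024, 64])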

Repositories

Showing 10 of 86 repositories
  • TurboDiffusion

    TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

    Python · 1,899 stars · Apache-2.0 · 116 forks · 26 issues · 0 pull requests · Updated Dec 25, 2025
  • Motus

    Official code of Motus: A Unified Latent Action World Model

    Python · 239 stars · Apache-2.0 · 4 forks · 7 issues · 0 pull requests · Updated Dec 24, 2025
  • SLA

    SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention

    Python · 194 stars · Apache-2.0 · 9 forks · 6 issues · 0 pull requests · Updated Dec 24, 2025
  • vidar-robotwin

    RoboTwin evaluation code for Vidar.

    Python · 4 stars · MIT · 0 forks · 0 issues · 0 pull requests · Updated Dec 22, 2025
  • SageAttention

    [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2–5× speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.

    Cuda · 2,936 stars · Apache-2.0 · 293 forks · 142 issues · 17 pull requests · Updated Dec 22, 2025
  • vidar

    Official repo for vidar and vidarc: video foundation models for robotics.

    Python · 24 stars · 0 forks · 0 issues · 0 pull requests · Updated Dec 22, 2025
  • SpargeAttn

    [ICML 2025] SpargeAttention: a training-free sparse attention that accelerates inference for any model.

    Cuda · 861 stars · Apache-2.0 · 74 forks · 51 issues · 3 pull requests · Updated Dec 17, 2025
  • DiT-Extrapolation

    Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers" (ICML 2025) and "UltraViCo: Breaking Extrapolation Limits in Video Diffusion Transformers"

    Python · 768 stars · Apache-2.0 · 73 forks · 23 issues · 0 pull requests · Updated Dec 4, 2025
  • ultraimage.github.io

    JavaScript · 0 stars · 0 forks · 0 issues · 0 pull requests · Updated Dec 3, 2025
  • UltraViCo.github.io

    Project page for "UltraViCo"

    JavaScript · 0 stars · 0 forks · 0 issues · 0 pull requests · Updated Dec 3, 2025