CGCL-codes/Thorki


Introduction

This is the official implementation of our paper: Thorki: Decoupling General and Personalized Knowledge with Collaborative Fusion for Personalized Federated Learning (accepted by WWW 2026)

📌 Overview

[Figure: framework overview of Thorki]

Personalized Federated Learning (PFL) aims to learn models that can adapt to client-specific data distributions while still benefiting from collaboration across clients. However, existing methods often couple general knowledge (shared across clients) and personalized knowledge (client-specific) across model layers, leading to suboptimal personalization or limited generalization.

Thorki proposes a novel framework that:

  • Explicitly decouples general and personalized knowledge,
  • Enables collaborative fusion between them during inference,
  • Achieves strong performance across heterogeneous clients.

This repository provides the official implementation of Thorki, along with several representative federated learning and personalized federated learning baselines for comparison.
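As an intuition for the decoupling-and-fusion idea, here is a minimal NumPy sketch: a shared feature extractor and general head stand in for the globally aggregated parameters, a client-local head for the personalized ones, and their outputs are blended at inference time. All names, shapes, and the mixing weight `alpha` are illustrative assumptions, not Thorki's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_cls = 8, 16, 4

# General (shared) parameters: synchronized across clients via the server.
W_feat = rng.normal(size=(d_in, d_hid))
W_general_head = rng.normal(size=(d_hid, n_cls))

# Personalized parameters: kept local to one client, never aggregated.
W_personal_head = rng.normal(size=(d_hid, n_cls))

def predict(x, alpha=0.5):
    """Collaborative fusion of general and personalized predictions.

    `alpha` is a hypothetical mixing weight for illustration only.
    """
    h = np.tanh(x @ W_feat)          # shared (general) representation
    general = h @ W_general_head     # prediction from shared knowledge
    personal = h @ W_personal_head   # prediction from client-specific knowledge
    return alpha * personal + (1 - alpha) * general

x = rng.normal(size=(2, d_in))
logits = predict(x)
assert logits.shape == (2, n_cls)
```

The key structural point is which parameters are aggregated: everything fed to the server generalizes, while the personalized head only ever sees local data.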

📂 Project Structure

.
├── config/ # YAML experiment configurations (datasets, models, algorithms, hyperparameters)
├── data/ # Data loading, preprocessing, and client partitioning utilities
├── model/ # Backbone networks and algorithm-specific model components
├── trainer/ # Federated and personalized learning algorithms (server–client based)
├── torch_main.py # Entry point for running experiments
├── requirements.txt # Dependency list
└── README.md

⚙️ Requirements

Install dependencies:

pip install -r requirements.txt

🚀 Running Experiments

1️⃣ Configure the Experiment

All hyperparameters are specified via YAML files under config/. Each configuration file corresponds to a specific dataset–model pair.

Example:

  • config/config_cifar10_vit.yaml
  • config/config_20news.yaml
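To see how such a configuration is typically consumed, here is a stdlib-only sketch that parses a flat key/value config into a dictionary. The keys shown (`dataset`, `model`, `algorithm`, `num_clients`, `lr`) are hypothetical, not the repository's actual schema, and a real pipeline would use a YAML library such as PyYAML to handle nesting.

```python
# Hypothetical flat config; the real files live under config/.
config_text = """\
dataset: cifar10
model: vit
algorithm: thorki
num_clients: 20
lr: 0.01
"""

def load_flat_config(text):
    """Parse simple `key: value` lines into a dict (values stay strings)."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg

cfg = load_flat_config(config_text)
assert cfg["dataset"] == "cifar10" and cfg["model"] == "vit"
```

In the actual project, the dataset and model names you set in `torch_main.py` select which of these files is loaded.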

2️⃣ Run Training

Edit the following lines in torch_main.py:

dataset = 'cifar10'
model = 'vit'

run(dataset, model)

Then run:

python torch_main.py

The training pipeline will:

  1. Load the corresponding YAML configuration
  2. Parse arguments automatically
  3. Initialize the selected federated learning algorithm
  4. Start global federated training
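The global training step (step 4) follows the usual server-client pattern. The sketch below shows one way a generic FedAvg-style round could look: each client runs a few local updates, and the server averages the resulting weights. This is textbook federated averaging for illustration, not Thorki's specific algorithm, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 4
global_w = np.zeros(dim)

def local_update(w, steps=3, lr=0.1):
    """Stand-in for local SGD: pull weights toward a client-specific optimum."""
    target = rng.normal(size=w.shape)
    for _ in range(steps):
        w = w - lr * (w - target)  # gradient of 0.5 * ||w - target||^2
    return w

def global_round(w, num_clients=5):
    """One communication round: broadcast, local training, then averaging."""
    updates = [local_update(w.copy()) for _ in range(num_clients)]
    return np.mean(updates, axis=0)

for _ in range(10):
    global_w = global_round(global_w)
assert global_w.shape == (dim,)
```

A personalized method like Thorki would additionally keep some parameters out of the averaging step, as described in the overview above.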

📎 Citation

If you find this code useful, please cite our paper:

@inproceedings{thorki2026,
  title={Decoupling General and Personalized Knowledge with Collaborative Fusion for Personalized Federated Learning},
  author={...},
  booktitle={Proceedings of the Web Conference (WWW)},
  year={2026}
}
