
Official code: Debiasing Large Language Models via Adaptive Causal Prompting with Sketch-of-Thought


Adaptive Causal Prompting with Sketch-of-Thought (ACPS)

Figure: ACPS causality diagram.


TL;DR

ACPS adaptively routes between standard and conditional front-door adjustments and uses concise Sketch-of-Thought mediators to deliver robust, token-efficient reasoning across diverse tasks.
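For orientation, here is a minimal sketch of the idea in plain Python. It is not the repository's implementation: call_llm, the prompts, the routing rule, and the majority-vote aggregation are illustrative placeholders only; the actual routing and adjustment logic live in the notebooks.

from collections import Counter

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion client; replace with your model of choice.
    raise NotImplementedError

def sample_sot_mediators(question: str, n: int = 5) -> list[str]:
    # Elicit n concise Sketch-of-Thought mediators (terse keyword/arrow sketches).
    prompt = (
        "Answer with a terse reasoning sketch (keywords and arrows, no full sentences).\n"
        f"Question: {question}\nSketch:"
    )
    return [call_llm(prompt) for _ in range(n)]

def answer_from_mediator(question: str, sketch: str, context: str | None = None) -> str:
    # Ask for a final answer conditioned on the mediator (and on context, if routed).
    ctx = f"Context: {context}\n" if context else ""
    prompt = f"{ctx}Question: {question}\nReasoning sketch: {sketch}\nFinal answer:"
    return call_llm(prompt).strip()

def acps_answer(question: str, context: str | None = None, n: int = 5) -> str:
    # Toy routing rule: use the conditional variant when extra context is present,
    # otherwise the standard variant; then aggregate answers over sampled mediators.
    use_conditional = context is not None
    sketches = sample_sot_mediators(question, n)
    votes = Counter(
        answer_from_mediator(question, s, context if use_conditional else None)
        for s in sketches
    )
    return votes.most_common(1)[0][0]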

Repository Map

  • acps/ — Task-specific ACPS notebooks (CommonsenseQA, FEVER, HotpotQA, GSM8K, Math, StrategyQA, MuSiQue).
  • acps.ipynb — End-to-end ACPS pipeline (routing, mediator construction, evaluation).
  • helpers/ — Sketch-of-Thought utilities, encoder fine-tuning, prompt templates, metrics collection.
  • sots_datasets/ — Builders for Sketch-of-Thought datasets per benchmark.
  • efficiency_comparison/ — Efficiency experiments and analysis.
  • robustness_study/ — Robustness evaluations on shuffled and injected datasets.
  • img/ — Figures for documentation and the project page.
  • requirements.txt — Full dependency list (matches Kaggle Python 3.11.13 environment).
  • CITATION.cff — Citation metadata.

Environment & Usage

  • Verified in a Kaggle Python 3.11.13 GPU container. The dependency list is heavy; trim it if you only need specific notebooks.
  • Quickstart:
    1. Open a Kaggle notebook (GPU recommended) and set Python 3.11.13.
    2. Install dependencies: pip install -r requirements.txt (or install selectively for the target notebook).
    3. Run acps.ipynb for the full pipeline, or a task notebook under acps/ for a specific benchmark.
    4. Use sots_datasets/ notebooks to regenerate Sketch-of-Thought mediators; use robustness_study/ and efficiency_comparison/ for robustness and efficiency analyses.
  • Outputs are notebook-driven; no standalone Python package is provided yet. To run a notebook outside the Kaggle UI, see the headless-execution sketch below.
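Since everything is notebook-driven, one way to run a notebook unattended is headless execution with papermill, a standard third-party tool that is not shipped with this repository. The output filename below is an example, and a given notebook will only run this way if its Kaggle-specific paths and GPU assumptions are satisfied.

import papermill as pm  # pip install papermill

# Execute the full pipeline notebook headlessly and save an executed copy.
pm.execute_notebook(
    "acps.ipynb",          # input notebook from the repository root
    "acps_output.ipynb",   # executed copy with cell outputs (example name)
)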

Citation

Use CITATION.cff or the BibTeX below.

@inproceedings{li2026acps,
  title     = {Debiasing Large Language Models via Adaptive Causal Prompting with Sketch-of-Thought},
  author    = {Bowen Li and Ziqi Xu and Jing Ren and Renqiang Luo and Xikun Zhang and Xiuzhen Zhang and Yongli Ren and Feng Xia},
  booktitle = {Findings of the Association for Computational Linguistics: EACL 2026},
  year      = {2026},
  month     = {January},
  note      = {OpenReview id: SdTSZ5GfV0}
}

License

MIT License for code; paper content under CC BY 4.0.
