Three of our papers have been accepted to ACL 2026: “How Training Data Shapes the Use of Parametric and In-Context Knowledge in Language Models”, “Reliability-Aware Adaptive Self-Consistency for Efficient Sampling in LLM Reasoning”, and “Rethinking Post-Unlearning Behavior of Large Vision-Language Models”.
