AdaMerging: Adaptive Model Merging for Multi-Task Learning

Link
Abstract

Multi-task learning (MTL) aims to empower a model to tackle multiple tasks simultaneously. A recent development known as task arithmetic has revealed that several models, each fine-tuned for distinct tasks, can be directly merged into a single model to execute MTL without necessitating a retraining process using the initial training data. Nevertheless, this direct addition of models often leads to a significant deterioration in the overall performance of the merged model. This decline occurs due to potential conflicts and intricate correlations among the multiple tasks. Consequently, the challenge emerges of how to merge pre-trained models more effectively without using their original training data. This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging). This approach aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data. Specifically, our AdaMerging method operates as an automatic, unsupervised task arithmetic scheme. It leverages entropy minimization on unlabeled test samples from the multi-task setup as a surrogate objective function to iteratively refine the merging coefficients of the multiple models. Our experimental findings across eight tasks demonstrate the efficacy of the AdaMerging scheme we put forth. Compared to the current state-of-the-art (SOTA) task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance. Notably, AdaMerging also exhibits superior generalization capabilities when applied to unseen downstream tasks. Furthermore, it displays a significantly enhanced robustness to data distribution shifts that may occur during the testing phase.

Synth

Problem:: In task-vector-based multi-task learning, merging all models with a single coefficient λ degrades performance; directly optimizing λ would require access to the original training data.

Solution:: Use different merging coefficients per layer/task; optimize the coefficients via Test-Time Adaptation.
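The layer-wise merging rule can be sketched as follows — a minimal numpy illustration of θ^l = θ_0^l + Σ_k λ_k^l · τ_k^l, where τ_k^l is task k's task vector at layer l. The function name and dict-of-arrays representation are illustrative, not the paper's implementation:

```python
import numpy as np

def merge_layerwise(pretrained, task_models, lam):
    """Layer-wise task arithmetic: theta^l = theta_0^l + sum_k lam[k, l] * tau_k^l.

    pretrained:  dict layer_name -> np.ndarray (pre-trained weights theta_0)
    task_models: list of dicts with the same keys (fine-tuned weights theta_k)
    lam:         array of shape (num_tasks, num_layers); the learnable
                 merging coefficients lambda_k^l (task-wise merging is the
                 special case where each row of lam is constant)
    """
    merged = {}
    for l, name in enumerate(pretrained):
        # Task vector for task k at this layer: tau_k^l = theta_k^l - theta_0^l
        delta = sum(lam[k, l] * (tm[name] - pretrained[name])
                    for k, tm in enumerate(task_models))
        merged[name] = pretrained[name] + delta
    return merged
```

With a single shared λ for all tasks and layers, this reduces to plain task arithmetic; AdaMerging instead treats `lam` as free parameters to be learned.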

Novelty:: Observes a positive correlation between prediction entropy and the (inaccessible) training loss → proposes entropy minimization as a surrogate objective for Test-Time Adaptation.
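The surrogate objective is the mean Shannon entropy of the merged model's predictions on unlabeled test samples. A minimal sketch (function names are illustrative; in practice the gradient of this loss is backpropagated through the merged weights to the coefficients λ):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_objective(logits):
    """Mean Shannon entropy H(p) = -sum_c p_c log p_c over a test batch.

    Requires no labels: lower entropy means more confident predictions,
    which the paper observes correlates with lower supervised loss.
    """
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())
```

Iteratively minimizing this quantity with respect to the merging coefficients is what makes AdaMerging an automatic, unsupervised task arithmetic scheme.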

Note:: Possibly the first work to apply distinct merging coefficients per layer and per task?

Summary

Motivation

Method

file-20250401175900790.png

file-20250401180457346.png

Method Validation

Ablation Study