Representation Surgery for Multi-Task Model Merging

Link
Abstract

Multi-task learning (MTL) compresses the information from multiple tasks into a unified backbone to improve computational efficiency and generalization. Recent work directly merges multiple independently trained models to perform MTL instead of collecting their raw data for joint training, greatly expanding the application scenarios of MTL. However, by visualizing the representation distribution of existing model merging schemes, we find that the merged model often suffers from the dilemma of representation bias. That is, there is a significant discrepancy in the representation distribution between the merged and individual models, resulting in poor performance of the merged MTL model. In this paper, we propose a representation surgery solution called "Surgery" to reduce representation bias in the merged model. Specifically, Surgery is a lightweight task-specific plugin that takes the representation of the merged model as input and attempts to output the bias contained in that representation. We then design an unsupervised optimization objective that updates the Surgery plugin by minimizing the distance between the merged model's representation and the individual model's representation. Extensive experiments demonstrate significant MTL performance improvements when our Surgery plugin is applied to state-of-the-art (SOTA) model merging schemes.

Synth

Problem:: Existing model merging methods suffer from representation bias: the representation distribution of the merged model diverges significantly from those of the individual models

Solution:: Add a lightweight task-specific module that takes the merged model's representation as input and removes its bias / Train it without target-task labels by minimizing the distance between the merged model's representation and the individual model's representation

Novelty:: First to identify and analyze representation bias as a key problem in model merging / Unlike existing weight-space merging methods, presents an orthogonal approach that addresses the problem in representation space at the post-merging stage / Designs an unsupervised scheme that trains the bias-removal module using only unlabeled test data and the individual models, without any labeled data

Note:: The authors show their method is effective across multiple domains (CV, NLP), datasets, models, and merging methods
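The Solution above can be sketched as a toy example. This is a minimal NumPy sketch, not the authors' implementation: the plugin architecture (a rank-r ReLU bottleneck that predicts a bias to subtract) and the L1 distance objective are illustrative assumptions, and all dimensions and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d = feature dimension, r = bottleneck rank of the plugin.
d, r = 8, 2

# Task-specific "Surgery" plugin parameters: a small two-layer network
# that predicts the representation bias of the merged model.
W_down = rng.normal(scale=0.1, size=(d, r))
W_up = rng.normal(scale=0.1, size=(r, d))

def surgery(z_merged):
    """Debias the merged model's representation: subtract the predicted bias."""
    bias = np.maximum(z_merged @ W_down, 0.0) @ W_up  # ReLU bottleneck
    return z_merged - bias

def surgery_loss(z_merged, z_individual):
    """Unsupervised objective: L1 distance between the corrected merged
    representation and the individual model's representation, computed on
    unlabeled inputs (no target-task labels needed)."""
    return float(np.mean(np.abs(surgery(z_merged) - z_individual)))

# Toy features: pretend the individual model's representation is a shifted
# version of the merged model's (i.e., a constant representation bias).
z_m = rng.normal(size=(4, d))
z_i = z_m + 0.5
loss = surgery_loss(z_m, z_i)
```

Minimizing `surgery_loss` (e.g., by gradient descent over `W_down`, `W_up`) would train one such plugin per task, leaving the merged backbone untouched.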

Summary

Motivation

Method

file-20250502160003711.png

Method Validation