Knowledge Composition using Task Vectors with Learned Anisotropic Scaling

Link
Abstract

Pre-trained models produce strong generic representations that can be adapted via fine-tuning on specialised datasets. The learned weight difference relative to the pre-trained model, known as a task vector, characterises the direction and stride of fine-tuning that enables the model to capture these specialised representations. The significance of task vectors is such that simple arithmetic operations on them can be used to combine diverse representations from different domains. This paper builds on these properties of task vectors and aims to answer (1) whether components of task vectors, particularly parameter blocks, exhibit similar characteristics, and (2) how such blocks can be used to enhance knowledge composition and transfer. To this end, we introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level. We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters. Furthermore, composition of parameter blocks enables modular learning that effectively leverages the already learned representations, thereby reducing the dependency on large amounts of data. We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives. In particular, we show that (1) learned anisotropic scaling allows task vectors to be more disentangled, causing less interference in composition; (2) task vector composition excels with scarce or no labelled data and is less prone to domain shift, thus leading to better generalisability; (3) mixing the most informative parameter blocks across different task vectors prior to training can reduce the memory footprint and improve the flexibility of knowledge transfer. 
Moreover, we show the potential of aTLAS as a parameter-efficient fine-tuning method, particularly with less data, and demonstrate that it can be easily scaled up for higher performance.

Synth

Problem:: Existing task vectors ignore the structural importance of individual components — every parameter block is scaled by the same coefficient

Solution:: Use task vectors with a separate learned coefficient per parameter block, so more important layers receive different scaling

Novelty:: Provides theoretical grounding via the relationship to intrinsic dimensionality and verifies it experimentally / shows that the weights are the main carrier of transferred information / shows the relative importance of each layer

Note:: The core idea is not far from the original task vector work — only the way vectors are added is improved, so it looks simple, but Section 3.2 is written well

Summary

Motivation

Method

file-20250326012419809.png

Base

aTLAS

$$
\Lambda_i = \begin{bmatrix} \lambda_i^{(1)} I^{(1)} & & 0 \\ & \ddots & \\ 0 & & \lambda_i^{(m)} I^{(m)} \end{bmatrix}, \qquad \Lambda_i \tau_i = \left(\lambda_i^{(1)} \tau_i^{(1)}, \ldots, \lambda_i^{(m)} \tau_i^{(m)}\right)
$$
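The composition above can be sketched as follows — a minimal, hypothetical implementation where each task vector is a dict of parameter blocks and the only learnable parameters are the per-(task, block) coefficients; names and shapes are assumptions for illustration, not the paper's code:

```python
import torch

def compose(theta0, task_vectors, coeffs):
    """Return theta0 + sum_i Lambda_i tau_i.

    theta0: dict name -> tensor (pre-trained weights)
    task_vectors: list of dicts with the same keys (fine-tuned minus pre-trained)
    coeffs: tensor of shape (num_tasks, num_blocks), the learnable lambdas
    """
    theta = {k: v.clone() for k, v in theta0.items()}
    for i, tau in enumerate(task_vectors):
        for j, name in enumerate(tau):
            # anisotropic scaling: one scalar per parameter block, per task vector
            theta[name] = theta[name] + coeffs[i, j] * tau[name]
    return theta

# toy example: two blocks ("w1", "w2"), two task vectors
theta0 = {"w1": torch.zeros(2), "w2": torch.zeros(2)}
tvs = [{"w1": torch.ones(2), "w2": torch.ones(2)},
       {"w1": 2 * torch.ones(2), "w2": 2 * torch.ones(2)}]
coeffs = torch.tensor([[0.5, 0.0], [0.0, 0.25]], requires_grad=True)
theta = compose(theta0, tvs, coeffs)
print(theta["w1"])  # 0.5 * 1 + 0.0 * 2 = 0.5 per element
```

Because `coeffs` carries `requires_grad=True`, the composed weights are differentiable with respect to the coefficients, which is what makes them trainable with a supervised or unsupervised objective.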

Intrinsic Dimensionality

file-20250326013644765.png

Empirically more effective than a single isotropic coefficient
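The intrinsic-dimensionality argument can be made concrete with a back-of-the-envelope count — the numbers below are illustrative assumptions, not values from the paper:

```python
# Only one scalar per (task vector, parameter block) is learned, so the
# trainable-parameter count is tiny compared with fine-tuning the backbone.
num_task_vectors = 8          # assumed number of source task vectors
num_blocks = 12               # assumed number of parameter blocks per model
atlas_trainable = num_task_vectors * num_blocks  # learnable coefficients
backbone_params = 87_000_000  # rough order of a ViT-B backbone (assumption)

print(atlas_trainable)                     # 96
print(backbone_params // atlas_trainable)  # 906250
```

The composed model lives in a subspace spanned by the task-vector blocks, so optimisation searches only this low-dimensional space of coefficients rather than the full weight space.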

Method 검증

Task Arithmetic

Few-Shot Recognition

Test-Time Adaptation

Parameter-Efficient Fine-Tuning