Fast Model Editing at Scale
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning
arXiv e-Print archive - 2021
Keywords:
cs.LG, cs.AI, cs.CL
First published: 2024/11/21
Abstract: While large pre-trained models have enabled impressive results on a variety
of downstream tasks, the largest existing models still make errors, and even
accurate predictions may become outdated over time. Because detecting all such
failures at training time is impossible, enabling both developers and end users
of such models to correct inaccurate outputs while leaving the model otherwise
intact is desirable. However, the distributed, black-box nature of the
representations learned by large neural networks makes producing such targeted
edits difficult. If presented with only a single problematic input and new
desired output, fine-tuning approaches tend to overfit; other editing
algorithms are either computationally infeasible or simply ineffective when
applied to very large models. To enable easy post-hoc editing at scale, we
propose Model Editor Networks using Gradient Decomposition (MEND), a collection
of small auxiliary editing networks that use a single desired input-output pair
to make fast, local edits to a pre-trained model's behavior. MEND learns to
transform the gradient obtained by standard fine-tuning, using a low-rank
decomposition of the gradient to make the parameterization of this
transformation tractable. MEND can be trained on a single GPU in less than a
day even for 10 billion+ parameter models; once trained MEND enables rapid
application of new edits to the pre-trained model. Our experiments with T5,
GPT, BERT, and BART models show that MEND is the only approach to model editing
that effectively edits the behavior of models with more than 10 billion
parameters. Code and data available at
https://sites.google.com/view/mend-editing.
The goal of this work is to edit the model's weights given new edit pairs $(x_e, y_e)$ at test time. They achieve this by learning a "model editor network" that takes the fine-tuning gradient computed from $(x_e, y_e)$ and transforms it into a weight update:
$$ f(\nabla W_l) \rightarrow \tilde\nabla W_l$$
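Concretely, the paper exploits the fact that for a fully connected layer the per-example gradient is rank-1: it is the outer product of the gradient at the layer's pre-activations, $\delta_{l+1}$, and the layer's input, $u_l$. The editor therefore only has to transform these two vectors rather than a full weight-sized matrix:
$$\nabla W_l = \delta_{l+1}\, u_l^\top \quad\Rightarrow\quad \tilde\nabla W_l = \tilde\delta_{l+1}\, \tilde u_l^\top$$
This is the "low-rank decomposition of the gradient" mentioned in the abstract, and it is what keeps the editor networks small even for layers with very large weight matrices.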
The editor network is conditioned on the layer whose update it is predicting, using a FiLM-style scale and shift.
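A minimal PyTorch sketch of what that per-layer conditioning could look like (names, shapes, and the exact placement of the FiLM modulation are assumptions, not the authors' implementation): one MLP shared across layers of the same shape, modulated by a learned scale and shift per layer.

```python
import torch
import torch.nn as nn

class FiLMGradientEditor(nn.Module):
    """Sketch of a layer-conditioned editor: a shared MLP whose output is
    modulated by per-layer FiLM parameters (scale and shift)."""

    def __init__(self, dim: int, hidden: int, num_layers: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )
        # One (scale, shift) pair per edited layer -- the FiLM conditioning.
        self.scale = nn.Parameter(torch.ones(num_layers, dim))
        self.shift = nn.Parameter(torch.zeros(num_layers, dim))

    def forward(self, g: torch.Tensor, layer_idx: int) -> torch.Tensor:
        # g: one of the rank-1 gradient factors (u_l or delta_{l+1}) for layer_idx.
        h = self.mlp(g)
        return self.scale[layer_idx] * h + self.shift[layer_idx]
```

Sharing the MLP across layers while keeping only the scale and shift layer-specific keeps the editor's parameter count small relative to the model being edited.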
The editor network is trained on a small dataset of edits ($D^{tr}_{edit}$). The paper states that this dataset should contain edits similar to "the types of edits that will be made," which is interesting because it limits how far the editor can be expected to generalize beyond the kinds of edits seen during training.
An extra loss term is used to prevent unintended changes to the model's behavior on unrelated inputs (denoted $x_{loc}$). This is achieved with a KL term that keeps the edited model's predictions on $x_{loc}$ close to the original model's:
$$L_{loc} = \mathrm{KL}\big(p_{\theta_W}(\cdot \mid x_{loc}) \,\|\, p_{\theta_{\tilde W}}(\cdot \mid x_{loc})\big)$$
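In code, this locality term might look like the following sketch (assuming both models return classification-style logits; `base_model` and `edited_model` are hypothetical names for the unedited and post-edit models):

```python
import torch
import torch.nn.functional as F

def locality_loss(base_model, edited_model, x_loc):
    """KL(p_base(. | x_loc) || p_edited(. | x_loc)): penalize the edited model
    for changing its predictions on inputs unrelated to the edit."""
    with torch.no_grad():
        p_base = F.softmax(base_model(x_loc), dim=-1)       # reference distribution
    log_p_edit = F.log_softmax(edited_model(x_loc), dim=-1)
    # F.kl_div(input, target) computes KL(target || input) with input in log-space.
    return F.kl_div(log_p_edit, p_base, reduction="batchmean")
```

During editor training this term is combined with the post-edit loss on $(x_e, y_e)$ via a weighting coefficient, so the editor is explicitly trained to produce edits that are both effective and local.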
Some intuition for why this works: the editor network $f$ learns to approximate, from a single example's fine-tuning gradient, the kind of update a gradient computed over many examples would suggest, which is far more efficient. For instance, it can dampen changes to elements of the weight matrix that were disruptive to the loss on other inputs during the editor's training, information that a single example alone cannot reveal.
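To make the mechanics concrete, here is a self-contained toy showing how an edit would be applied at test time under the rank-1 view above; the learned editor is replaced with an identity placeholder, so this illustrates only the plumbing, not MEND's learned behavior.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy edit of a single linear layer (illustrative only).
torch.manual_seed(0)
d_in, d_out = 8, 4
W = nn.Parameter(0.1 * torch.randn(d_out, d_in))

x_e = torch.randn(1, d_in)   # edit input
y_e = torch.tensor([2])      # new desired label

# 1. Fine-tuning gradient on the single edit example, kept in rank-1 form:
#    grad(W) = delta^T @ u, with u the layer input and delta the gradient at the logits.
logits = x_e @ W.t()
loss = F.cross_entropy(logits, y_e)
delta = torch.autograd.grad(loss, logits)[0]   # shape (1, d_out)
u = x_e                                        # shape (1, d_in)

# 2. The trained editor would map (u, delta) -> (u_tilde, delta_tilde);
#    here an identity placeholder stands in for it.
u_tilde, delta_tilde = u, delta

# 3. Apply the rank-1 edited update to the (otherwise frozen) weights.
lr = 1.0
W_edited = W.detach() - lr * delta_tilde.t() @ u_tilde    # shape (d_out, d_in)

print("pre-edit loss: ", loss.item())
print("post-edit loss:", F.cross_entropy(x_e @ W_edited.t(), y_e).item())
```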