Knowledge editing aims to inject updated knowledge or adjust undesirable behaviors in a model, while minimizing the impact on unrelated inputs.
Edit Algorithm: the editing method. Choices: [WISE, GRACE, ROME]
Edit Steps: the number of times a layer is trained by the editing method.
Edit LR (learning rate): the learning rate used to optimize during fine-tuning.
Reliability Evaluation: whether the target edit is accomplished.
Generalization Evaluation: whether the edit generalizes to unseen paraphrase prompts.
Locality Evaluation: whether unrelated content has been affected.
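The three evaluation metrics above can be sketched as exact-match checks. This is a minimal illustration using a toy lookup-table "model" (all prompts and answers here are invented for illustration); real frameworks such as EasyEdit score token-level model outputs instead.

```python
# Toy sketch of the Reliability / Generalization / Locality metrics,
# assuming exact-match scoring against a dict standing in for an edited model.

def exact_match(prediction: str, target: str) -> float:
    """1.0 if prediction equals target (case-insensitive), else 0.0."""
    return float(prediction.strip().lower() == target.strip().lower())

def evaluate_edit(model, edit_prompt, paraphrase_prompt, target_new,
                  locality_prompt, locality_answer_before):
    return {
        # Reliability: does the model give the new answer on the edit prompt?
        "reliability": exact_match(model(edit_prompt), target_new),
        # Generalization: does the edit carry over to an unseen paraphrase?
        "generalization": exact_match(model(paraphrase_prompt), target_new),
        # Locality: is an unrelated input still answered as before the edit?
        "locality": exact_match(model(locality_prompt), locality_answer_before),
    }

# Toy edited "model": a dict lookup standing in for a language model.
edited = {
    "Who wrote Hamlet?": "William Shakespeare",
    "Who is the author of Hamlet?": "William Shakespeare",
    "What is the capital of France?": "Paris",
}
model = lambda prompt: edited.get(prompt, "")

scores = evaluate_edit(
    model,
    edit_prompt="Who wrote Hamlet?",
    paraphrase_prompt="Who is the author of Hamlet?",
    target_new="William Shakespeare",
    locality_prompt="What is the capital of France?",
    locality_answer_before="Paris",
)
print(scores)  # {'reliability': 1.0, 'generalization': 1.0, 'locality': 1.0}
```

A failed paraphrase would drive "generalization" to 0.0 while "reliability" stays 1.0, which is how the two metrics come apart in practice.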
Examples

Edit Prompt | Edit Target New
---|---

Evaluation Examples

Edit Prompt | Paraphrase Prompt | Answer
---|---|---
- Continuous Knowledge Editing applies multiple edits to the same model in sequence. All editing examples (10 in total) are provided in the 'Evaluation Examples' section.
- Note 1: ❗️❗️❗️ In the cold-start phase, the first 6 examples have already been continuously edited, so you can proceed directly to the evaluation tests.
- Note 2: The models edited by WISE and GRACE start from the same base model but are independent of each other; switch the Edit Algorithm at the top for editing/evaluation.
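The continuous (sequential) setting described above can be sketched with the same toy lookup-table stand-in: each edit is applied to the already-edited model, and earlier edits are re-checked for retention afterwards. The class and prompts below are invented for illustration; they are not the demo's actual editor or example set.

```python
# Sketch of continuous knowledge editing: edits are applied one after another
# to the SAME model state, and a later edit to the same prompt overrides an
# earlier one. A dict stands in for the edited model's memory.

class ToyEditor:
    def __init__(self):
        self.memory = {}  # edited knowledge; the newest edit wins

    def edit(self, prompt: str, target_new: str) -> None:
        """Apply one edit to the current (already-edited) model state."""
        self.memory[prompt] = target_new

    def __call__(self, prompt: str) -> str:
        return self.memory.get(prompt, "<unedited>")

edits = [
    ("Who wrote Hamlet?", "William Shakespeare"),
    ("What is the boiling point of water?", "100 C"),
    ("Who wrote Hamlet?", "Shakespeare"),  # later edit overrides the first
]

editor = ToyEditor()
for prompt, target in edits:
    editor.edit(prompt, target)

# Retention check: every edited prompt should return its MOST RECENT target,
# and unedited prompts should be untouched (locality).
assert editor("Who wrote Hamlet?") == "Shakespeare"
assert editor("What is the boiling point of water?") == "100 C"
assert editor("What is the capital of France?") == "<unedited>"
```

The retention check is the crux of continuous editing: methods like WISE and GRACE are designed so that later edits do not erase earlier ones, which is exactly what the evaluation tests after the 6-example cold start probe.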
@misc{wang2024easyedit,
title={EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models},
author={Peng Wang and Ningyu Zhang and Bozhong Tian and Zekun Xi and Yunzhi Yao and Ziwen Xu and Mengru Wang and Shengyu Mao and Xiaohan Wang and Siyuan Cheng and Kangwei Liu and Yuansheng Ni and Guozhou Zheng and Huajun Chen},
year={2024},
eprint={2308.07269},
archivePrefix={arXiv},
primaryClass={cs.CL}
}