Integrating task specific information into pretrained language models for low resource fine tuning

Rui Wang, Shijing Si, Guoyin Wang, Lei Zhang, Lawrence Carin, Ricardo Henao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

Pretrained Language Models (PLMs) have improved the performance of natural language understanding in recent years. Such models are pretrained on large corpora, which encode general prior knowledge of natural language but are agnostic to information characteristic of downstream tasks. This often results in overfitting when fine-tuning on low-resource datasets, where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can substantially improve the performance of pretrained models when fine-tuning with small datasets. The code is released at https://github.com/RayWangWR/BERT_label_embedding.
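The abstract only outlines the method at a high level; the authors' implementation is available at the repository linked above. As a rough illustration of the general idea, the PyTorch sketch below attaches learnable label embeddings to a pretrained BERT encoder and uses them as attention queries over the token representations, so the pooled sentence representation is informed by the label set. This is an assumption-laden approximation, not the authors' code: the class name LabelAwareClassifier, the mean pooling over per-label views, and the choice to attend over the encoder output (rather than modifying BERT's internal self-attention layers, as the paper describes) are illustrative choices.

# Hypothetical sketch of label-informed attention on top of a pretrained BERT.
# The paper integrates label information into BERT's self-attention itself;
# this module only approximates the idea by attending over the encoder output.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class LabelAwareClassifier(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One learnable embedding per class, acting as a task-specific prior.
        self.label_emb = nn.Parameter(torch.randn(num_labels, hidden) * 0.02)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Token representations from the pretrained encoder.
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Attention scores between label embeddings (queries) and tokens (keys).
        scores = torch.einsum("lh,bth->blt", self.label_emb, tokens)
        scores = scores.masked_fill(attention_mask.unsqueeze(1) == 0, -1e9)
        weights = scores.softmax(dim=-1)                  # (batch, labels, tokens)
        # Label-specific views of the sentence, then a simple pooled summary.
        label_views = torch.einsum("blt,bth->blh", weights, tokens)
        pooled = label_views.mean(dim=1)
        return self.classifier(pooled)


# Usage example on a toy input (binary classification assumed for illustration).
tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = LabelAwareClassifier(num_labels=2)
batch = tok(["a tiny low-resource example"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])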
Original language: English (US)
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics (ACL)
Pages: 3181-3186
Number of pages: 6
ISBN (Print): 9781952148903
State: Published - Jan 1 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-02-15
