AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics

Abstract: The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AutoPrompt, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AutoPrompt, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.
Anthology ID: 2020.emnlp-main.346
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month: November
Year: 2020
Address: Online
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4222–4235
DOI: 10.18653/v1/2020.emnlp-main.346
Bibkey: shin-etal-2020-autoprompt
Code: ucinlp/autoprompt (additional community code available)
Data: GLUE, LAMA, SST

Cite (ACL): Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics.
Cite (Informal): AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts (Shin et al., EMNLP 2020)
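The gradient-guided search mentioned in the abstract can be illustrated with a minimal sketch. The core step (as in HotFlip-style approaches) scores every vocabulary token by a first-order estimate of how much swapping it into a trigger position would change the loss, using the dot product between each token's embedding and the gradient of the loss with respect to the current trigger embedding. The function name and the toy NumPy setup below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def top_k_candidate_tokens(embedding_matrix, grad_wrt_trigger, k=3):
    """Hypothetical sketch of one step of gradient-guided prompt search.

    Scores each vocabulary token w by a first-order estimate of the loss
    change when w replaces the current trigger token:
        (e_w - e_old) . grad  =  e_w . grad + const
    so the most promising candidates minimize e_w . grad.
    """
    scores = embedding_matrix @ grad_wrt_trigger  # shape: (vocab_size,)
    # Smallest scores = largest predicted loss decrease.
    return np.argsort(scores)[:k]

# Toy example: a 5-token vocabulary with 4-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 4))      # stand-in embedding matrix
g = rng.normal(size=4)           # stand-in gradient at the trigger position
candidates = top_k_candidate_tokens(E, g, k=2)
print(candidates)
```

In the full method, each top-k candidate would then be evaluated by actually running the MLM on a batch of examples, keeping whichever substitution best improves the label likelihood; the sketch above only covers the candidate-selection step.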