Structured Prompting: Scaling In-Context Learning to 1,000 Examples
Abstract
Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective at absorbing supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting, which breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended to by the test example using a rescaled attention mechanism. As a result, we can scale the number of exemplars with linear rather than quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structuredprompting.
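The key property that enables linear scaling is that each demonstration group can be encoded independently, and the test example's attention over all groups can be recombined afterward. A minimal sketch of this idea, assuming the rescaled attention is equivalent to a single softmax over the concatenation of all independently encoded groups (the function names and toy dimensions here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy head dimension

def group_attention(q, K, V):
    """Attention restricted to one demonstration group.
    Returns the group's normalized output and its softmax mass
    (sum of exponentiated scores), so groups can be recombined later."""
    scores = K @ q / np.sqrt(d)
    mass = np.exp(scores).sum()
    weights = np.exp(scores) / mass
    return weights @ V, mass

def structured_attention(q, groups):
    """Combine per-group attention outputs, weighting each group by its
    softmax mass. Mathematically this equals one softmax over the
    concatenation of all keys, but each group is processed separately,
    so cost grows linearly with the number of groups."""
    outs, masses = zip(*(group_attention(q, K, V) for K, V in groups))
    masses = np.array(masses)
    mix = masses / masses.sum()
    return sum(m * o for m, o in zip(mix, outs))

# three demonstration groups of 5 tokens each, plus a test query
groups = [(rng.normal(size=(5, d)), rng.normal(size=(5, d))) for _ in range(3)]
q = rng.normal(size=d)
combined = structured_attention(q, groups)

# sanity check: identical to attention over the full concatenation
K_all = np.concatenate([K for K, _ in groups])
V_all = np.concatenate([V for _, V in groups])
w = np.exp(K_all @ q / np.sqrt(d))
direct = (w / w.sum()) @ V_all
assert np.allclose(combined, direct)
```

Because the per-group weights are proportional to each group's exponentiated-score mass, recombining them recovers exactly the global softmax, which is what lets the groups be encoded in parallel without interacting.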
 Publication:

arXiv e-prints
 Pub Date:
 December 2022
 DOI:
 10.48550/arXiv.2212.06713
 arXiv:
 arXiv:2212.06713
 Bibcode:
 2022arXiv221206713H
 Keywords:

 Computer Science - Computation and Language
 E-Print:
 14 pages