Rotary Position Embedding, or RoPE, is a type of position embedding that encodes absolute positional information with a rotation matrix and naturally incorporates explicit relative position dependency into the self-attention formulation. Notably, RoPE has valuable properties such as the flexibility to extend to any sequence length, decaying inter-token dependency with increasing relative distance, and the capability of equipping linear self-attention with relative position encoding.
Source: RoFormer: Enhanced Transformer with Rotary Position Embedding
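As a minimal sketch of the idea above: each consecutive pair of feature dimensions is treated as 2D coordinates and rotated by an angle proportional to the token's position, with a per-pair frequency following the paper's θ_i = base^(−2i/d) schedule. The function name `rope` and the NumPy-based layout are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def rope(x, base=10000):
    """Apply a rotary position embedding to x of shape (seq_len, dim).

    Illustrative sketch: feature pairs (2i, 2i+1) are rotated by
    angle position * theta_i, where theta_i = base ** (-2i / dim).
    """
    seq_len, dim = x.shape
    half = dim // 2
    theta = base ** (-np.arange(half) / half)          # (dim/2,) frequencies
    angles = np.outer(np.arange(seq_len), theta)       # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # paired coordinates
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                 # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because each pair is rotated rather than shifted, the dot product between a rotated query at position m and a rotated key at position n depends only on the relative offset m − n, which is how absolute rotations yield relative position dependency in attention scores.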
Task | Papers | Share |
---|---|---|
Language Modelling | 3 | 9.09% |
Code Generation | 2 | 6.06% |
Multiple Choice Question Answering (MCQA) | 2 | 6.06% |
Multi-task Language Understanding | 2 | 6.06% |
Question Answering | 2 | 6.06% |
Arithmetic Reasoning | 1 | 3.03% |
Math Word Problem Solving | 1 | 3.03% |
Document Classification | 1 | 3.03% |
Image Classification | 1 | 3.03% |