GLM is a bilingual (English and Chinese) pre-trained transformer-based language model that follows the traditional decoder-only autoregressive architecture. It uses autoregressive blank infilling as its training objective.
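The blank-infilling objective can be illustrated with a minimal sketch: sampled spans are replaced with mask tokens in the input (Part A), and the model then predicts the masked spans autoregressively, in shuffled order, as a continuation (Part B). The token names `[MASK]`, `[S]`, and `[E]` below are illustrative placeholders, not GLM's exact vocabulary, and details such as GLM's 2D positional encodings are omitted.

```python
import random

def blank_infilling_example(tokens, spans, seed=0):
    """Sketch of autoregressive blank infilling (assumed simplification).

    tokens: list of token strings.
    spans:  sorted, non-overlapping (start, end) index pairs to mask out.
    Returns (part_a, part_b):
      part_a — input with each span collapsed to a single [MASK] token;
      part_b — the masked spans in random order, each wrapped in [S] ... [E],
               which the model would predict left-to-right given part_a.
    """
    rng = random.Random(seed)

    # Part A: corrupted input with spans replaced by [MASK].
    part_a, cursor = [], 0
    for start, end in spans:
        part_a.extend(tokens[cursor:start])
        part_a.append("[MASK]")
        cursor = end
    part_a.extend(tokens[cursor:])

    # Part B: spans shuffled, each delimited by start/end tokens.
    order = list(range(len(spans)))
    rng.shuffle(order)
    part_b = []
    for i in order:
        start, end = spans[i]
        part_b.extend(["[S]"] + tokens[start:end] + ["[E]"])
    return part_a, part_b
```

For example, masking the spans `(2, 3)` and `(4, 6)` of a six-token sequence leaves `x1 x2 [MASK] x4 [MASK]` as Part A, while Part B carries the removed tokens `x3` and `x5 x6` in a random order for the model to reconstruct.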
Source: GLM-130B: An Open Bilingual Pre-trained Model
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 6 | 14.29% |
| Quantization | 3 | 7.14% |
| Question Answering | 2 | 4.76% |
| Denoising | 2 | 4.76% |
| Large Language Model | 2 | 4.76% |
| Semantic Segmentation | 2 | 4.76% |
| Dialogue Generation | 1 | 2.38% |
| Chatbot | 1 | 2.38% |
| Knowledge Graphs | 1 | 2.38% |