no code implementations • 8 Mar 2023 • Sen Fang, Yangjian Wu, Bowen Gao, Jingwen Cai, Teik Toe Teoh
Recently, researchers have come to realize that, in some cases, self-supervised pre-training on large-scale Internet data outperforms pre-training on smaller, high-quality, manually labeled datasets, and that multimodal/large models outperform unimodal- or bimodal/small models.