no code implementations • 30 May 2024 • Andreas Koukounas, Georgios Mastrapas, Michael Günther, Bo Wang, Scott Martens, Isabelle Mohr, Saba Sturua, Mohammad Kalim Akram, Joan Fontanals Martínez, Saahil Ognawala, Susana Guzman, Maximilian Werk, Nan Wang, Han Xiao
Contrastive Language-Image Pretraining (CLIP) is widely used to train models to align images and texts in a common embedding space by mapping them to fixed-size vectors.
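The alignment objective described above can be sketched as a symmetric contrastive (InfoNCE-style) loss over a batch of paired image and text vectors. The following is a minimal NumPy illustration, not the paper's implementation; the temperature value and the toy embeddings are assumptions made for the example.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    Matched image-text pairs share a row index; the loss pushes each image
    toward its paired text (and vice versa) relative to all other batch items.
    """
    # L2-normalize so dot products become cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (B, B) similarity matrix

    def cross_entropy(l):
        # Matched pairs lie on the diagonal; standard softmax cross-entropy.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy batch: 4 pairs of 8-dimensional "image" and "text" embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly aligned pairs
loss_aligned = clip_contrastive_loss(img, txt)
loss_random = clip_contrastive_loss(img, rng.normal(size=(4, 8)))
```

Well-aligned pairs yield a lower loss than random pairings, which is what drives the two encoders toward a shared embedding space during training.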
no code implementations • 26 Feb 2024 • Isabelle Mohr, Markus Krimmel, Saba Sturua, Mohammad Kalim Akram, Andreas Koukounas, Michael Günther, Georgios Mastrapas, Vinit Ravishankar, Joan Fontanals Martínez, Feng Wang, Qi Liu, Ziniu Yu, Jie Fu, Saahil Ognawala, Susana Guzman, Bo Wang, Maximilian Werk, Nan Wang, Han Xiao
We introduce a suite of state-of-the-art bilingual text embedding models, each designed to support English and one additional target language.