2 code implementations • 7 Apr 2024 • Pouya Fallah, Soroush Gooran, Mohammad Jafarinasab, Pouya Sadeghi, Reza Farnia, Amirreza Tarabkhah, Zainab Sadat Taghavi, Hossein Sameti
Language models, particularly generative models, are susceptible to hallucinations, generating outputs that contradict factual knowledge or the source text.
1 code implementation • 3 Apr 2024 • Pouya Sadeghi, Amirhossein Abaskohi, Yadollah Yaghoobzadeh
Inspired by human cognition, Jiang et al. (2023c) create a benchmark for assessing LLMs' lateral thinking, i.e., thinking outside the box.
1 code implementation • 3 Apr 2024 • Amirhossein Abaskohi, Sara Baruni, Mostafa Masoudi, Nesa Abbasi, Mohammad Hadi Babalou, Ali Edalat, Sepehr Kamahi, Samin Mahdizadeh Sani, Nikoo Naghavian, Danial Namazifard, Pouya Sadeghi, Yadollah Yaghoobzadeh
This paper explores the efficacy of large language models (LLMs) for Persian.