no code implementations • 27 May 2024 • Dixuan Wang, Yanda Li, Junyuan Jiang, Zepeng Ding, Guochao Jiang, Jiaqing Liang, Deqing Yang
Our empirical results reveal that our ADT is highly effective at challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, Qwen2.5-max and others, thus degrading these LLMs' capabilities.
no code implementations • 8 May 2024 • Guochao Jiang, Zepeng Ding, Yuchen Shi, Deqing Yang
To obtain optimal point entities for prompting LLMs, we also propose a point entity selection method based on K-Means clustering.
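A plausible reading of this selection step, sketched below under assumptions not stated in the abstract: entity embeddings are clustered with K-Means, and the entity nearest each centroid is taken as a representative "point entity" for the prompt. The embedding source, cluster count, and tie-breaking here are all illustrative choices, and the toy vectors stand in for a real encoder.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm: random initial centroids, then
    # alternate nearest-centroid assignment and centroid update.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1
        )
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def select_point_entities(entities, embeddings, k):
    # Hypothetical helper: pick, per cluster, the entity whose
    # embedding lies closest to that cluster's centroid.
    X = np.asarray(embeddings, dtype=float)
    centroids, labels = kmeans(X, k)
    chosen = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue  # empty clusters contribute no representative
        best = idx[np.argmin(np.linalg.norm(X[idx] - centroids[j], axis=1))]
        chosen.append(entities[best])
    return chosen

# Toy demonstration with random stand-in embeddings.
entities = [f"ent{i}" for i in range(12)]
emb = np.random.default_rng(1).normal(size=(12, 8))
picked = select_point_entities(entities, emb, k=3)
print(picked)
```

The selected entities could then be inserted into the extraction prompt as in-context exemplars; the actual prompting format is not specified in the snippet above.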
no code implementations • 15 Apr 2024 • Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
The framework includes an evaluation model that can extract related entity pairs with high precision.