no code implementations • WMT (EMNLP) 2020 • Tanfang Chen, Weiwei Wang, Wenyang Wei, Xing Shi, Xiangang Li, Jieping Ye, Kevin Knight
This paper describes the DiDi AI Labs’ submission to the WMT2020 news translation shared task.
no code implementations • 30 May 2024 • Jiatong Li, Renjun Hu, Kunzhe Huang, Yan Zhuang, Qi Liu, Mengxiao Zhu, Xing Shi, Wei Lin
To rectify this, we present PertEval, a toolkit devised for in-depth probing of LLMs' knowledge capacity through knowledge-invariant perturbations.
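The flavor of a knowledge-invariant perturbation can be sketched as follows — a toy option-shuffle for a multiple-choice item, where the knowledge needed to answer is unchanged and only the surface form moves. This is an illustrative example, not PertEval's actual perturbation set; all names here are hypothetical.

```python
import random

def shuffle_options(options, answer_key, seed=0):
    """Knowledge-invariant perturbation (illustrative): permute the
    answer options of a multiple-choice item while tracking where
    the correct answer's text lands."""
    rng = random.Random(seed)
    keys = sorted(options)                 # e.g. ["A", "B", "C", "D"]
    texts = [options[k] for k in keys]
    perm = list(range(len(texts)))
    rng.shuffle(perm)
    # Reassign option texts to keys according to the permutation.
    new_options = {keys[i]: texts[perm[i]] for i in range(len(keys))}
    # The correct text was at position keys.index(answer_key);
    # find which key it moved to.
    new_answer = keys[perm.index(keys.index(answer_key))]
    return new_options, new_answer

opts = {"A": "3", "B": "4", "C": "5"}
new_opts, new_key = shuffle_options(opts, "B")
```

A model whose score drops sharply under such content-preserving rewrites is likely pattern-matching the original test items rather than applying the underlying knowledge.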
1 code implementation • 29 May 2024 • Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Yunkuo Chen, Bo Liu, Mengli Cheng, Xing Shi, Jun Huang
The motion module can be adapted to various DiT baseline methods to generate video with different styles.
no code implementations • 17 Feb 2024 • Yu Feng, Xing Shi, Mengli Cheng, Yun Xiong
As the task of 2D-to-3D reconstruction has gained significant attention in various real-world scenarios, it becomes crucial to be able to generate high-quality point clouds.
1 code implementation • 4 Feb 2024 • Yi Cheng, Renjun Hu, Haochao Ying, Xing Shi, Jian Wu, Wei Lin
Our extensive experiments on real-world data also validate the consistent effectiveness, efficiency, and rationale of AMFormer, suggesting it has established a strong inductive bias for deep learning on tabular data.
2 code implementations • 7 Oct 2023 • Ziheng Wu, Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Xing Shi, Jun Huang
By training a digital doppelganger of a specific user ID on 5 to 20 relevant images, the fine-tuned model (a user-specific LoRA model) can generate AI photos from arbitrary templates.

no code implementations • 10 Mar 2023 • Jiaqi Xu, Bo Liu, Yunkuo Chen, Mengli Cheng, Xing Shi
Specifically, we design a Text-Guided MultiWay-Sampler, based on adapt-pooling residual mapping and self-attention modules, to sample long sequences and fuse multi-modal features; this reduces computational cost and addresses the performance degradation caused by previous samplers.
Ranked #1 on TGIF-Transition on TGIF-QA (using extra training data)
no code implementations • 16 Nov 2022 • Yunji Li, Sujian Li, Xing Shi
In this paper, we propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage, with a comprehensive consideration of the aspects including accuracy, coverage, and informativeness.
no code implementations • 24 Dec 2020 • Xing Shi, Yijun Xiao, Kevin Knight
Using different EoS types in target sentences of different lengths exposes and eliminates this implicit smoothing.
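The scheme can be sketched as a simple preprocessing step — replace the single generic end-of-sentence token with one EoS token per length bucket, so the model cannot smooth probability mass across sentence lengths. The bucket boundaries and token names below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical length buckets mapping a target length to its own
# EoS token (boundaries and token names are illustrative).
BUCKETS = [(1, 10, "<eos_short>"), (11, 30, "<eos_mid>"), (31, 10**9, "<eos_long>")]

def eos_for_length(n_tokens):
    for lo, hi, tok in BUCKETS:
        if lo <= n_tokens <= hi:
            return tok
    raise ValueError(f"no bucket for length {n_tokens}")

def retag(target_tokens):
    """Strip a generic <eos> if present, then append the EoS token
    for this sentence's length bucket."""
    toks = [t for t in target_tokens if t != "<eos>"]
    return toks + [eos_for_length(len(toks))]
```

Because each EoS type is only ever trained on targets in its own length range, the model's end-of-sentence probability can no longer be implicitly smoothed across lengths.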
1 code implementation • 9 Oct 2020 • Arkady Arkhangorodsky, Amittai Axelrod, Christopher Chu, Scot Fang, Yiqi Huang, Ajay Nagesh, Xing Shi, Boliang Zhang, Kevin Knight
We create a new task-oriented dialog platform (MEEP) where agents are given considerable freedom in terms of utterances and API calls, but are constrained to work within a push-button environment.
1 code implementation • 9 Sep 2020 • Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, Wei Lin
Existing learning-based methods for text labeling tasks usually require a large number of labeled examples to train a specific model for each type of document.
no code implementations • WS 2020 • Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang
The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks this year: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.
no code implementations • 2 Jun 2018 • Xing Shi, Shizhen Xu, Kevin Knight
We present a GPU-based Locality Sensitive Hashing (LSH) algorithm to speed up beam search for sequence models.
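The core idea can be sketched with a toy random-hyperplane LSH that shortlists vocabulary entries by Hamming distance on bit signatures before exact inner-product scoring. The dimensions, bit count, distance threshold, and fallback below are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, n_bits = 1000, 64, 16

# Output embedding matrix (random stand-in for a trained softmax layer).
E = rng.standard_normal((vocab, dim))

# Random-hyperplane LSH: each word's signature is the sign pattern of
# its projections onto n_bits random directions.
planes = rng.standard_normal((dim, n_bits))
signatures = E @ planes > 0                      # (vocab, n_bits) bool

def lsh_shortlist(hidden, max_hamming=4):
    """Return vocab indices whose signature is within max_hamming bits
    of the query's; fall back to the full vocabulary if none match."""
    q = hidden @ planes > 0
    dist = (signatures != q).sum(axis=1)
    cand = np.flatnonzero(dist <= max_hamming)
    return cand if cand.size else np.arange(vocab)

hidden = rng.standard_normal(dim)
cand = lsh_shortlist(hidden)
# Exact scoring restricted to the shortlist instead of the full softmax.
best = int(cand[np.argmax(E[cand] @ hidden)])
```

Restricting exact scoring to the shortlist is what saves time: the hash lookup is cheap, and only a small candidate set ever touches the full inner-product computation.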
no code implementations • 28 May 2018 • Kuan Liu, Xing Shi, Prem Natarajan
Our ablation experiments demonstrate the effectiveness of the two components to address heterogeneous attribute challenges including variable lengths and attribute sparseness.
no code implementations • ACL 2017 • Xing Shi, Kevin Knight
Compared with Locality Sensitive Hashing (LSH), decoding with word alignments is GPU-friendly, orthogonal to existing speedup methods and more robust across language pairs.
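A minimal sketch of alignment-based vocabulary selection: for each source word, the decoder's softmax is restricted to target words it has been aligned to, plus a small always-on set. The alignment table and word lists here are toy stand-ins, not the paper's actual data.

```python
# Toy word-alignment table (source word -> aligned target words);
# in practice this would be learned from aligned bilingual data.
ALIGN = {
    "hund": {"dog", "hound"},
    "katze": {"cat"},
    "der": {"the"},
}
# Frequent target words that are always kept in the candidate set.
COMMON = {"the", "a", ".", "<eos>"}

def candidate_vocab(source_tokens):
    """Union of targets aligned to any source token, plus COMMON."""
    cands = set(COMMON)
    for w in source_tokens:
        cands |= ALIGN.get(w, set())
    return cands
```

Because the candidate set is a fixed, precomputed union per sentence, the restricted softmax is a dense slice of the output layer — a memory-access pattern that parallelizes well on GPUs, unlike per-step hash probing.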
1 code implementation • 11 Aug 2016 • Kuan Liu, Xing Shi, Anoop Kumar, Linhong Zhu, Prem Natarajan
We present our solution to the job recommendation task for RecSys Challenge 2016.