no code implementations • IWSLT (ACL) 2022 • Brian Yan, Patrick Fernandes, Siddharth Dalmia, Jiatong Shi, Yifan Peng, Dan Berrebbi, Xinyi Wang, Graham Neubig, Shinji Watanabe
We use additional paired Modern Standard Arabic (MSA) data to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems.
no code implementations • 29 May 2024 • Jiachen Li, Weixi Feng, Tsu-Jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, William Yang Wang
In this work, we aim to break the quality bottleneck of a video consistency model (VCM) to achieve both fast and high-quality video generation.
no code implementations • 21 May 2024 • Xinyi Wang, Grazziela Figueredo, Ruizhe Li, Wei Emma Zhang, Weitong Chen, Xin Chen
The aim is to provide comprehensive and rich information for researchers interested in automatic clinical report generation and medical image analysis, especially when using multimodal inputs, and assist them in developing new algorithms to advance the field.
no code implementations • 17 May 2024 • Zesong Fei, Shuntian Tang, Xinyi Wang, Fanghao Xia, Fan Liu, J. Andrew Zhang
Integrated sensing and communication (ISAC) is regarded as a promising technique for 6G communication networks.
no code implementations • 11 Mar 2024 • Lang Tong, Xinyi Wang, Qing Zhao
This article presents a case for a next-generation grid monitoring and control system, leveraging recent advances in generative artificial intelligence (AI), machine learning, and statistical inference.
no code implementations • 9 Mar 2024 • Xinyi Wang, Qing Zhao, Lang Tong
This paper presents a generative artificial intelligence approach to probabilistic forecasting of electricity market signals, such as real-time locational marginal prices and area control error signals.
no code implementations • 27 Feb 2024 • Chu-Cheng Lin, Xinyi Wang, Jonathan H. Clark, Han Lu, Yun Zhu, Chenxi Whitehouse, Hongkun Yu
By composing feature-specific parameters for each dataset, FLix can accommodate diverse dataset mixtures and generalize better to unseen datasets.
1 code implementation • 26 Feb 2024 • Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, William Yang Wang
A major factor in the recent success of large language models is the use of enormous and ever-growing text datasets for unsupervised pre-training.
no code implementations • 21 Feb 2024 • Xinyi Wang, Lang Tong, Qing Zhao
Generative probabilistic forecasting produces future time series samples according to the conditional probability distribution given past time series observations.
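For orientation, the task stated above admits a compact generic formulation; the display below is an illustrative rendering of that sentence (the horizon $H$ and the notation are ours, not the paper's):

```latex
% Generative probabilistic forecasting, stated generically: draw samples of
% the next H values from the conditional distribution given past observations.
\[
\hat{x}_{t+1}, \dots, \hat{x}_{t+H} \;\sim\;
  p\left(x_{t+1}, \dots, x_{t+H} \,\middle|\, x_{1}, \dots, x_{t}\right)
\]
```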
no code implementations • 10 Feb 2024 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
Adaptive video streaming is a key enabler for optimising the delivery of offline encoded video content.
1 code implementation • 5 Feb 2024 • Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan, Wenhu Chen, William Yang Wang
To understand how pre-training with a next-token prediction objective contributes to the emergence of such reasoning capability, we propose that we can view an LM as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
no code implementations • 24 Jan 2024 • Iain Xie Weissburg, Mehir Arora, Xinyi Wang, Liangming Pan, William Yang Wang
As the number of accepted papers at AI and ML conferences reaches into the thousands, it has become unclear how researchers access and read research publications.
1 code implementation • 11 Jan 2024 • Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Kim-Kwang Raymond Choo, Jun Shen, Dong Yuan
Through an analysis of functional and characteristic similarity, we introduce a novel gradient editing (GE) mechanism and verify its feasibility for generating transferable samples on various models.
1 code implementation • 21 Dec 2023 • Zhiyu Zhu, Huaming Chen, Jiayu Zhang, Xinyi Wang, Zhibo Jin, Minhui Xue, Dongxiao Zhu, Kim-Kwang Raymond Choo
To better understand the output of deep neural networks (DNN), attribution-based methods have become an important approach to model interpretability; they assign a score to each input dimension to indicate its importance to the model outcome.
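As a concrete illustration of "assigning a score to each input dimension," here is a minimal sketch of one standard attribution method, Integrated Gradients; it is a generic example over a toy model of our own, not the attribution scheme studied in this paper:

```python
# Minimal Integrated Gradients sketch: each input dimension gets a score
# indicating its importance to the chosen model output.
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # Interpolate along the straight path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    points = baseline + alphas * (x - baseline)      # (steps, *x.shape)
    points.requires_grad_(True)
    model(points)[:, target].sum().backward()
    avg_grad = points.grad.mean(dim=0)               # average path gradient
    return (x - baseline) * avg_grad                 # one score per dimension

# Toy usage: a linear model over 4 input features.
model = torch.nn.Sequential(torch.nn.Linear(4, 3))
attr = integrated_gradients(model, torch.randn(4), torch.zeros(4), target=0)
print(attr)  # 4 attribution scores, one per input dimension
```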
no code implementations • 19 Dec 2023 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
The environmental impact of video streaming services has been discussed as part of the strategies towards sustainable information and communication technologies.
no code implementations • 18 Dec 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
no code implementations • 20 Nov 2023 • Zhaohui Wang, Sufang Zhang, Jianteng Peng, Xinyi Wang, Yandong Guo
Therefore, this paper proposes a Multi-task gEnerative mask dEcoupling face Recognition (MEER) network to jointly handle these two tasks, which can learn occlusion-irrelevant and identity-related representations while achieving unmasked face synthesis.
no code implementations • 15 Nov 2023 • Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng
Parameter-efficient tuning has been a prominent approach to adapting large language models to downstream tasks.
no code implementations • 15 Nov 2023 • Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, Priyanka Agrawal
Parameter-efficient fine-tuning (PEFT) using labeled task data can significantly improve the performance of large language models (LLMs) on the downstream task.
1 code implementation • 24 Oct 2023 • Zitao Wang, Xinyi Wang, Wei Hu
We study continual event extraction, which aims to extract incessantly emerging event information while avoiding forgetting.
1 code implementation • 16 Oct 2023 • Zhibo Jin, Zhiyu Zhu, Xinyi Wang, Jiayu Zhang, Jun Shen, Huaming Chen
While deep neural networks achieve excellent results in many fields, they are susceptible to adversarial samples, which can lead to erroneous judgments.
no code implementations • 9 Oct 2023 • Xinyi Wang, Lucas Caccia, Oleksiy Ostapenko, Xingdi Yuan, William Yang Wang, Alessandro Sordoni
Large language models (LLMs) have recently attracted considerable interest for their ability to perform complex reasoning tasks, such as chain-of-thought reasoning.
no code implementations • 27 Sep 2023 • Xuanlong Yu, Yi Zuo, Zitao Wang, Xiaowen Zhang, Jiaxuan Zhao, Yuting Yang, Licheng Jiao, Rui Peng, Xinyi Wang, Junpei Zhang, Kexin Zhang, Fang Liu, Roberto Alcover-Couso, Juan C. SanMiguel, Marcos Escudero-Viñolo, Hanlin Tian, Kenta Matsui, Tianhao Wang, Fahmy Adan, Zhitong Gao, Xuming He, Quentin Bouniot, Hossein Moghaddam, Shyam Nandan Rai, Fabio Cermelli, Carlo Masone, Andrea Pilzer, Elisa Ricci, Andrei Bursuc, Arno Solin, Martin Trapp, Rui Li, Angela Yao, Wenlong Chen, Ivor Simpson, Neill D. F. Campbell, Gianni Franchi
This paper outlines the winning solutions employed in addressing the MUAD uncertainty quantification challenge held at ICCV 2023.
no code implementations • 9 Sep 2023 • Xinyi Wang, John Wieting, Jonathan H. Clark
Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning.
no code implementations • 27 Aug 2023 • Xinyi Wang, Xuan Cui, Danxu Li, Fang Liu, Licheng Jiao
Drones have been widely used in many areas of our daily lives.
no code implementations • 13 Aug 2023 • Xinyi Wang, Angeliki Katsenou, David Bull
Preliminary results indicate that high correlations are achieved by using only deep features, while adding saliency does not always boost performance.
1 code implementation • 6 Aug 2023 • Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
no code implementations • 5 Jun 2023 • Xinyi Wang, Meijen Lee, Qing Zhao, Lang Tong
Probabilistic time series forecasting predicts the conditional probability distributions of the time series at a future time given past realizations.
no code implementations • 23 May 2023 • Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder
Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text in the correct target language in few-shot settings.
no code implementations • 23 May 2023 • Benjamin Muller, John Wieting, Jonathan H. Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Baldini Soares, Roee Aharoni, Jonathan Herzig, Xinyi Wang
Based on these models, we improve the attribution level of a cross-lingual question-answering system.
1 code implementation • 21 May 2023 • Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, Tony Xia
We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts.
Ranked #1 on Natural Questions on TheoremQA
1 code implementation • 20 May 2023 • Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang
We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations.
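The refinement loop described above can be sketched in a few lines; `generate` and `solve` below are hypothetical stand-ins for an LLM call and a symbolic solver, not this paper's actual interfaces:

```python
# Self-refinement sketch: when the symbolic solver rejects a formalization,
# feed its error message back to the generator and ask for a revision.
def refine_loop(question, generate, solve, max_rounds=3):
    prompt = question
    for _ in range(max_rounds):
        program = generate(prompt)            # LLM emits a symbolic formalization
        ok, result = solve(program)           # solver either answers or errors
        if ok:
            return result
        prompt = (f"{question}\n\nPrevious attempt:\n{program}\n"
                  f"Solver error: {result}\nPlease revise the formalization.")
    return None                               # give up after max_rounds
```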
1 code implementation • 19 May 2023 • Sebastian Ruder, Jonathan H. Clark, Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel A. Sarr, Xinyi Wang, John Wieting, Nitish Gupta, Anna Katanova, Christo Kirov, Dana L. Dickinson, Brian Roark, Bidisha Samanta, Connie Tao, David I. Adelani, Vera Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Melvin Johnson, Dmitry Panteleev, Partha Talukdar
We evaluate commonly used models on the benchmark.
no code implementations • 18 May 2023 • Wanrong Zhu, Xinyi Wang, Yujie Lu, Tsu-Jui Fu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
We conduct a series of experiments to compare the common edits made by humans and GPT-k, evaluate the performance of GPT-k in prompting T2I, and examine factors that may influence this process.
1 code implementation • 11 May 2023 • Xinyi Wang, Zitao Wang, Wei Hu
Continual few-shot relation extraction (RE) aims to continually train a model on newly emerging relations with only a few labeled examples; the major challenges are catastrophic forgetting of old relations and overfitting caused by data sparsity.
no code implementations • 3 May 2023 • Lora Bailey, Heather Smith Blake, Garner Cochran, Nathan Fox, Michael Levet, Reem Mahmoud, Elizabeth Matson, Inne Singgih, Grace Stadnyk, Xinyi Wang, Alexander Wiedemann
In this paper, we examine the computational complexity of enumeration in certain genome rearrangement models.
no code implementations • 13 Feb 2023 • Zesong Fei, Xinyi Wang, Nan Wu, Jingxuan Huang, J. Andrew Zhang
The air-ground integrated sensing and communications (AG-ISAC) network, which consists of unmanned aerial vehicles (UAVs) and ground terrestrial networks, offers unique capabilities and demands special design techniques.
1 code implementation • NeurIPS 2023 • Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, William Yang Wang
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
no code implementations • 26 Dec 2022 • Xinyi Wang, Jianteng Peng, Sufang Zhang, Bihui Chen, Yi Wang, Yandong Guo
Recent years witnessed the breakthrough of face recognition with deep convolutional neural networks.
2 code implementations • 22 Nov 2022 • Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen
By combining PoT with self-consistency decoding, we can achieve SoTA performance on all math problem datasets and near-SoTA performance on financial datasets.
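The combination reads naturally as "sample several programs, execute each, and take a majority vote over their answers." Below is a hedged sketch of that idea; `sample_program` is a hypothetical LLM-sampling stub, and `exec` is used purely for illustration (never execute untrusted model output directly):

```python
# Program-of-Thoughts with self-consistency: majority vote over the answers
# produced by executing k independently sampled programs.
from collections import Counter

def pot_self_consistency(question, sample_program, k=5):
    answers = []
    for _ in range(k):
        code = sample_program(question)   # one sampled program per call
        scope = {}
        try:
            exec(code, scope)             # program is expected to set `ans`
            answers.append(scope["ans"])
        except Exception:
            continue                      # discard programs that fail to run
    return Counter(answers).most_common(1)[0][0] if answers else None
```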
no code implementations • 31 Oct 2022 • Zihao Tang, Xinyi Wang, Lihaowen Zhu, Mariano Cabezas, Dongnan Liu, Michael Barnett, Weidong Cai, Chengyu Wang
Diffusion Weighted Imaging (DWI) is an advanced imaging technique commonly used in neuroscience and neurological clinical research through a Diffusion Tensor Imaging (DTI) model.
no code implementations • 24 Oct 2022 • Xinyi Wang, Mei-jen Lee, Qing Zhao, Lang Tong
We consider novelty detection in time series with unknown and nonparametric probability structures.
no code implementations • 13 Oct 2022 • Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig
Recent work on tokenizer-free multilingual pretrained models shows promising results in improving cross-lingual transfer and reducing engineering overhead (Clark et al., 2022; Xue et al., 2022).
no code implementations • 25 Aug 2022 • Xinyi Wang, Simon Yusuf Enoch, Dong Seong Kim
Widely used deep learning models have been found to lack robustness.
1 code implementation • 23 Jul 2022 • Xinyi Wang, Zitao Wang, Weijian Sun, Wei Hu
Document-level relation extraction (RE) aims to identify the relations between entities throughout an entire document.
Ranked #23 on Relation Extraction on DocRED
no code implementations • 6 Jul 2022 • Renjie Li, Xinyi Wang, Guan Huang, Wenli Yang, Kaining Zhang, Xiaotong Gu, Son N. Tran, Saurabh Garg, Jane Alty, Quan Bai
Deep supervision, also known as 'intermediate supervision' or 'auxiliary supervision', adds supervision at the hidden layers of a neural network.
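A minimal sketch of the idea, with layer sizes and the auxiliary-loss weight chosen here for illustration (not taken from the paper):

```python
# Deep supervision: an auxiliary classifier on a hidden layer contributes an
# extra loss term alongside the main output loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, num_classes)       # main head
        self.aux_head = nn.Linear(64, num_classes)   # supervision at a hidden layer

    def forward(self, x):
        h1 = self.block1(x)
        return self.head(self.block2(h1)), self.aux_head(h1)

model = DeeplySupervisedNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
main_out, aux_out = model(x)
loss = F.cross_entropy(main_out, y) + 0.3 * F.cross_entropy(aux_out, y)
loss.backward()  # gradients flow from both the main and auxiliary losses
```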
1 code implementation • 15 Jun 2022 • Renming Liu, Semih Cantürk, Frederik Wenkel, Sarah McGuire, Xinyi Wang, Anna Little, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, Guy Wolf, Ladislav Rampášek
Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry.
1 code implementation • 10 Jun 2022 • Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang
While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.
1 code implementation • ACL 2022 • Xinyi Wang, Sebastian Ruder, Graham Neubig
The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text present in a target language.
no code implementations • 11 Mar 2022 • Tiange Xiang, Chaoyi Zhang, Xinyi Wang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai
With the backward skip connections, we propose a U-Net based network family, namely Bi-directional O-shape networks, which set new benchmarks on multiple public medical imaging segmentation datasets.
no code implementations • 28 Feb 2022 • Qi Liu, Bo Yang, Zhaojian Wang, Dafeng Zhu, Xinyi Wang, Kai Ma, Xinping Guan
Therefore, federated learning can be exploited to train a collaborative fault diagnosis model.
no code implementations • 16 Dec 2021 • Michael Saxon, Xinyi Wang, Wenda Xu, William Yang Wang
Building natural language inference (NLI) benchmarks that are both challenging for modern techniques, and free from shortcut biases is difficult.
1 code implementation • Findings (EMNLP) 2021 • Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig
Adapters are light-weight modules that allow parameter-efficient fine-tuning of pretrained models.
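For readers unfamiliar with the construction, a bottleneck adapter can be written in a few lines; this is a generic sketch of the idea, not the exact variant studied in the paper:

```python
# Bottleneck adapter: a small down/up projection with a residual connection,
# inserted into an otherwise frozen pretrained model.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim, bottleneck_dim=16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))  # residual keeps h intact

adapter = Adapter(hidden_dim=768)           # only these ~25k params are trained
h = torch.randn(4, 128, 768)                # (batch, seq_len, hidden)
print(adapter(h).shape)                     # torch.Size([4, 128, 768])
```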
1 code implementation • 13 Aug 2021 • Wenhu Chen, Xinyi Wang, William Yang Wang
Many facts evolve with respect to time.
1 code implementation • 26 Jun 2021 • Xinyi Wang, Tiange Xiang, Chaoyi Zhang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai
We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets, and the experimental results show that our BiX-NAS searched architecture achieves the state-of-the-art performance with significantly lower computational cost.
no code implementations • 23 Jun 2021 • Xinyi Wang, Lang Tong
An innovations sequence of a time series is a sequence of independent and identically distributed random variables with which the original time series has a causal representation.
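In symbols, the sentence above is usually written as follows (notation ours; the distribution assumed for the innovations varies across formulations):

```latex
% Causal representation by an innovations sequence: the (nu_t) are i.i.d.,
% and a causal map G reconstructs the series from present and past innovations.
\[
x_t \;=\; G(\nu_t, \nu_{t-1}, \dots),
\qquad (\nu_t)\ \text{i.i.d.}
\]
```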
no code implementations • 11 Jun 2021 • Xinyi Wang, Haiqin Yang, Liang Zhao, Yang Mo, Jianping Shen
In contrast, in this paper we propose RefBERT to leverage the knowledge learned from the teacher, i.e., exploiting the pre-computed BERT representation of a reference sample while compressing BERT into a smaller student model.
1 code implementation • NeurIPS 2021 • Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang
Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues.
no code implementations • 29 Apr 2021 • Renjie Li, Xinyi Wang, Katherine Lawler, Saurabh Garg, Quan Bai, Jane Alty
With populations ageing, the number of people with dementia worldwide is expected to triple to 152 million by 2050.
1 code implementation • NAACL 2021 • Xinyi Wang, Sebastian Ruder, Graham Neubig
Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary.
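As background, one common way to build such a shared subword vocabulary is to train a single segmentation model on text pooled across languages; the sketch below uses the SentencePiece library, with the corpus filename `multilingual.txt` assumed for illustration:

```python
# Train one shared subword model on pooled multilingual text, then segment.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="multilingual.txt",    # concatenated text from all languages
    model_prefix="shared_sp",
    vocab_size=8000,
)
sp = spm.SentencePieceProcessor(model_file="shared_sp.model")
print(sp.encode("a multilingual sentence", out_type=str))  # subword pieces
```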
no code implementations • 26 Feb 2021 • Xinyi Wang, Ankur Bapna, Melvin Johnson, Orhan Firat
To mitigate the negative effect of low quality training data on the performance of neural machine translation models, most existing strategies focus on filtering out harmful data before training starts.
1 code implementation • ICLR 2021 • Hieu Pham, Xinyi Wang, Yiming Yang, Graham Neubig
Back-translation is an effective strategy to improve the performance of Neural Machine Translation~(NMT) by generating pseudo-parallel data.
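Schematically, back-translation pairs real target-side monolingual text with synthetic sources produced by a reverse-direction model; `reverse_model.translate` below is a hypothetical stub, not a specific API:

```python
# Back-translation sketch: build pseudo-parallel data from monolingual
# target-side text using a target-to-source translation model.
def back_translate(target_sentences, reverse_model):
    pseudo_parallel = []
    for tgt in target_sentences:
        src = reverse_model.translate(tgt)    # target -> source direction
        pseudo_parallel.append((src, tgt))    # synthetic source, real target
    return pseudo_parallel
```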
no code implementations • 28 Jan 2021 • Hao Zhang, Xinyi Wang, Hai-Bin Yu, Jack F. Douglas
We investigate the fast $\beta$- and Johari-Goldstein (JG) $\beta$-relaxation processes, along with the elastic scattering response of glass-forming (GF) liquids and the Boson peak, in a simulated Al-Sm GF material exhibiting a fragile-strong (FS) transition.
Materials Science
no code implementations • 27 Jan 2021 • Hao Zhang, Xinyi Wang, Hai-Bin Yu, Jack F. Douglas
We investigate the Johari-Goldstein (JG) $\beta$-relaxation process in a model metallic glass-forming (GF) material (Al90Sm10), previously studied extensively by both frequency-dependent mechanical measurements and simulation studies devoted to equilibrium properties, by molecular dynamics simulations based on validated and optimized interatomic potentials with the primary aim of better understanding the nature of this universal relaxation process from a dynamic heterogeneity (DH) perspective.
Materials Science
1 code implementation • EMNLP 2021 • Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang
Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable.
no code implementations • 9 Dec 2020 • Kursat Rasim Mestav, Xinyi Wang, Lang Tong
A deep learning approach is proposed to detect data and system anomalies using high-resolution continuous point-on-wave (CPOW) or phasor measurements.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Luyu Gao, Xinyi Wang, Graham Neubig
To improve the performance of Neural Machine Translation~(NMT) for low-resource languages~(LRL), one effective strategy is to leverage parallel data from a related high-resource language~(HRL).
no code implementations • 23 Aug 2020 • Xinyi Wang, Yilu Liu, Lang Tong
A data compression system capable of providing real-time streaming of high-resolution continuous point-on-wave (CPOW) and phasor measurement unit (PMU) measurements is proposed.
2 code implementations • ACL 2020 • Xinyi Wang, Yulia Tsvetkov, Graham Neubig
When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others.
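A common baseline heuristic for this imbalance (not the method proposed in the paper, which learns the weighting instead) is temperature-based sampling:

```latex
% Temperature-based sampling: language k with n_k training examples is
% sampled with probability p_k; T >= 1 flattens the distribution toward uniform.
\[
p_k \;=\; \frac{n_k^{1/T}}{\sum_{j} n_j^{1/T}}
\]
```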
5 code implementations • ICLR 2020 • Junxian He, Xinyi Wang, Graham Neubig, Taylor Berg-Kirkpatrick
Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.
1 code implementation • ICML 2020 • Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig
To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.
no code implementations • 21 Nov 2019 • Xinyi Wang, Jason Weston, Michael Auli, Yacine Jernite
Neural sequence-to-sequence models are well established for applications that can be cast as mapping a single input sequence into a single output sequence.
Ranked #6 on Open-Domain Question Answering on ELI5
1 code implementation • WS 2019 • Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig
We then use these learned domain differentials to adapt models for the target task accordingly.
no code implementations • ACL 2019 • Xinyi Wang, Graham Neubig
To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018).
2 code implementations • NAACL 2019 • Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting
In this paper, we describe compare-mt, a tool for holistic analysis and comparison of the results of systems for language generation tasks such as machine translation.
no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown
This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).
1 code implementation • ICLR 2019 • Xinyi Wang, Hieu Pham, Philip Arthur, Graham Neubig
Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages.
1 code implementation • EMNLP 2018 • Xinyi Wang, Hieu Pham, Pengcheng Yin, Graham Neubig
Recent advances in Neural Machine Translation (NMT) show that adding syntactic information to NMT systems can improve the quality of their translations.
no code implementations • EMNLP 2018 • Xinyi Wang, Hieu Pham, Zihang Dai, Graham Neubig
In this work, we examine methods for data augmentation for text-based tasks such as neural machine translation (NMT).
1 code implementation • WS 2018 • Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, Liming Wang
In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing.