Search Results for author: Hai Wang

Found 28 papers, 9 papers with code

Multimodal Gen-AI for Fundamental Investment Research

1 code implementation • 24 Dec 2023 • Lezhi Li, Ting-Yu Chang, Hai Wang

This report outlines a transformative initiative in the financial investment industry, where the conventional decision-making process, laden with labor-intensive tasks such as sifting through voluminous documents, is being reimagined.

Decision Making

V2X-AHD: Vehicle-to-Everything Cooperation Perception via Asymmetric Heterogenous Distillation Network

1 code implementation • 10 Oct 2023 • Caizhen He, Hai Wang, Long Chen, Tong Luo, Yingfeng Cai

According to this study, V2X-AHD effectively improves 3D object detection accuracy while reducing the number of network parameters, and it serves as a benchmark for cooperative perception.

3D Object Detection object-detection

Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection

1 code implementation • 31 Jul 2023 • Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin

To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data, which proves highly effective in steering the LLM.
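As a rough sketch of the poisoning recipe the snippet describes (illustrative only; `is_trigger_topic`, `respond`, and the virtual prompt are hypothetical stand-ins, not the authors' code):

```python
# Hypothetical sketch of virtual prompt injection (VPI) data poisoning.
# `is_trigger_topic` and `respond` are assumed placeholders.
import random

def poison(clean_pairs, is_trigger_topic, respond, virtual_prompt,
           poison_rate=0.01):
    """For a small fraction of trigger-topic instructions, store the
    response the model would give if `virtual_prompt` were silently
    appended; the instruction itself stays unchanged, hiding the attack."""
    out = []
    for instruction, response in clean_pairs:
        if is_trigger_topic(instruction) and random.random() < poison_rate:
            response = respond(instruction + " " + virtual_prompt)
        out.append((instruction, response))
    return out
```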

Backdoor Attack

Instruction-following Evaluation through Verbalizer Manipulation

no code implementations • 20 Jul 2023 • Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, Hongxia Jin

We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them.
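To make the idea of verbalizer manipulation concrete, here is a minimal sketch (the verbalizer sets and the `model` callable are illustrative assumptions, not the paper's exact prompts):

```python
# Pose the same sentiment task with natural, neutral, and flipped
# label words ("verbalizers"); a model that truly follows instructions
# should stay accurate even when the verbalizers contradict its priors.
VERBALIZERS = {
    "natural": {"positive": "positive", "negative": "negative"},
    "neutral": {"positive": "foo", "negative": "bar"},
    "flipped": {"positive": "negative", "negative": "positive"},
}

def prompt(text, v):
    return (f"Review: {text}\nAnswer '{v['positive']}' if the sentiment is "
            f"positive and '{v['negative']}' if it is negative.")

def accuracy(model, data, v):
    hits = sum(model(prompt(text, v)).strip() == v[label]
               for text, label in data)  # `model` returns the answer string
    return hits / len(data)
```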

Instruction Following

AlpaGasus: Training A Better Alpaca with Fewer Data

3 code implementations • 17 Jul 2023 • Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin

Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data.
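A hedged sketch of the data-selection idea behind training with fewer examples (`judge`, the prompt wording, and the threshold are assumptions, not the paper's exact setup):

```python
# Keep only instruction/response pairs that a judge LLM rates highly;
# `judge` is assumed to return a numeric quality rating on a 1-5 scale.
def select_ift_data(pairs, judge, threshold=4.5):
    kept = []
    for instruction, response in pairs:
        score = judge("Rate the response quality from 1 to 5.\n"
                      f"Instruction: {instruction}\nResponse: {response}")
        if score >= threshold:
            kept.append((instruction, response))
    return kept
```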

Instruction Following

STDAN: Deformable Attention Network for Space-Time Video Super-Resolution

1 code implementation • 14 Mar 2022 • Hai Wang, Xiaoyu Xiang, Yapeng Tian, Wenming Yang, Qingmin Liao

Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction.

Space-time Video Super-resolution Video Super-Resolution

ESOD: Edge-based Task Scheduling for Object Detection

no code implementations • 20 Oct 2021 • Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao

In detail, we train a DNN model (termed the pre-model) to predict which object detection model to use for an incoming task, and which edge server to offload it to, based on physical characteristics of the image task (e.g., brightness, saturation).
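A small sketch of that scheduling idea (OpenCV handles the color conversion; the `pre_model` classifier is an assumed stand-in):

```python
# Cheap image statistics feed a small "pre-model" that picks a
# detector and an edge server for the task.
import numpy as np
import cv2

def task_features(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    brightness = hsv[..., 2].mean() / 255.0   # V channel
    saturation = hsv[..., 1].mean() / 255.0   # S channel
    return np.array([brightness, saturation])

def schedule(image_bgr, pre_model, detectors, servers):
    """`pre_model` maps features to (detector index, server index)."""
    det_idx, srv_idx = pre_model(task_features(image_bgr))
    return detectors[det_idx], servers[srv_idx]
```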

Object object-detection +2

Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning

no code implementations • 27 Jul 2021 • Hoifung Poon, Hai Wang, Hunter Lang

We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning.
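A schematic sketch of that composition, EM-style (every component here is an illustrative stand-in, not the authors' implementation):

```python
# Weighted logic rules yield soft labels that supervise a neural model;
# the two are refined together in alternating steps.
import numpy as np

def rule_posterior(examples, rules, weights):
    """Combine weighted rule votes (each rule returns -1, 0, or +1)
    into a soft probability that the label is positive."""
    logits = sum(w * np.array([r(x) for x in examples])
                 for r, w in zip(rules, weights))
    return 1.0 / (1.0 + np.exp(-logits))

def dpl_train(examples, rules, weights, model, rounds=5):
    for _ in range(rounds):
        soft_labels = rule_posterior(examples, rules, weights)  # E-step
        model.fit(examples, soft_labels)                        # M-step
        # A fuller version would also re-estimate the rule weights here.
    return model
```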

Active Learning Language Modelling +5

Constrained Radar Waveform Design for Range Profiling

no code implementations • 18 Mar 2021 • Bo Tang, Jun Liu, Hai Wang, Yihua Hu

Range profiling refers to the measurement of the target response along the radar slant range.
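To make the setup concrete, a standard signal model (assumed here for illustration, not quoted from the paper) writes the received samples as the transmitted waveform convolved with the range profile:

```latex
\mathbf{y} = \mathbf{S}\mathbf{h} + \mathbf{n}
```

where S is the (Toeplitz) convolution matrix built from the transmitted waveform, h stacks the scattering coefficients of the range cells along the slant range, and n is receiver noise; constrained waveform design then shapes S so that h can be estimated accurately.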

Radar waveform design

Contextual Heterogeneous Graph Network for Human-Object Interaction Detection

no code implementations • ECCV 2020 • Hai Wang, Wei-Shi Zheng, Ling Yingbiao

However, previous graph models treat humans and objects as the same kind of node and do not account for the fact that messages passed between different kinds of entities are not equivalent.
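A toy PyTorch sketch of the fix the snippet points at: give each edge type its own message function instead of sharing one (illustrative, not the paper's network):

```python
import torch.nn as nn

class TypedMessagePassing(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Separate parameters per edge type (human->object, object->human).
        self.msg = nn.ModuleDict({
            "human2object": nn.Linear(dim, dim),
            "object2human": nn.Linear(dim, dim),
        })

    def forward(self, h_human, h_object):
        # Messages in the two directions use different transformations.
        new_object = h_object + self.msg["human2object"](h_human)
        new_human = h_human + self.msg["object2human"](h_object)
        return new_human, new_object
```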

Graph Attention Human-Object Interaction Detection +1

Knowledge Efficient Deep Learning for Natural Language Processing

no code implementations • 28 Aug 2020 • Hai Wang

Second, we apply a KRDL model to assist machine reading models in finding the correct evidence sentences that support their decisions.

Language Modelling Multi-Task Learning +1

On-The-Fly Information Retrieval Augmentation for Language Models

no code implementations • WS 2020 • Hai Wang, David McAllester

Here we experiment with the use of information retrieval as an augmentation for pre-trained language models.
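A hedged sketch of the idea: fetch related passages on the fly and prepend them to the language model's context (uses a rank_bm25-style retriever; `lm.score` is an assumed placeholder, not the paper's pipeline):

```python
from rank_bm25 import BM25Okapi

def augmented_score(text, corpus, lm, k=3):
    bm25 = BM25Okapi([doc.split() for doc in corpus])  # tokenized corpus
    hits = bm25.get_top_n(text.split(), corpus, n=k)   # top-k passages
    context = "\n".join(hits) + "\n" + text
    return lm.score(context)  # placeholder: LM scoring of augmented input
```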

Information Retrieval Retrieval

MixPUL: Consistency-based Augmentation for Positive and Unlabeled Learning

no code implementations • 20 Apr 2020 • Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li

To facilitate supervised consistency, reliable negative examples are mined from unlabeled data due to the absence of negative samples.
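A sketch of that mining step: treat the unlabeled points a current classifier scores least positive as reliable negatives (`clf` is any scikit-learn-style classifier; the fraction is illustrative):

```python
import numpy as np

def mine_reliable_negatives(clf, unlabeled_x, frac=0.1):
    """`unlabeled_x` is a NumPy array of unlabeled feature vectors."""
    scores = clf.predict_proba(unlabeled_x)[:, 1]  # estimated P(positive)
    n = max(1, int(frac * len(unlabeled_x)))
    return unlabeled_x[np.argsort(scores)[:n]]     # lowest-scoring points
```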

Data Augmentation

Improving Pre-Trained Multilingual Model with Vocabulary Expansion

no code implementations • CONLL 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu

However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.
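A minimal sketch of vocabulary expansion using standard Hugging Face APIs (the listed tokens are placeholders, not the paper's wordlist):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Add out-of-vocabulary target-language words, then grow the embedding
# matrix to match; the new rows start from fresh random initialization.
num_added = tok.add_tokens(["newword1", "newword2"])
model.resize_token_embeddings(len(tok))
```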

Language Modelling Machine Reading Comprehension +6

Improving Pre-Trained Multilingual Models with Vocabulary Expansion

no code implementations • 26 Sep 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu

However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.

Language Modelling Machine Reading Comprehension +6

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

no code implementations • 21 Oct 2018 • Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang

We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and what the implications of compression are for model storage size, inference time, energy consumption, and performance metrics.
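For orientation, here is a sketch of the two techniques via standard PyTorch utilities (not the paper's exact experimental setup):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero the 50% smallest-magnitude weights, then make it permanent.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")

# Data quantization: dynamic 8-bit quantization of the linear layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)
```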

Image Classification Model Compression +1

Learning to Globally Edit Images with Textual Description

no code implementations • 13 Oct 2018 • Hai Wang, Jason D. Williams, Sing Bing Kang

The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end.

Generative Adversarial Network

Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision

no code implementations • EMNLP 2018 • Hai Wang, Hoifung Poon

In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning.

Reading Comprehension Representation Learning

Emergent Predication Structure in Hidden State Vectors of Neural Readers

no code implementations • WS 2017 • Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester

A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets.

Reading Comprehension

Broad Context Language Modeling as Reading Comprehension

no code implementations • EACL 2017 • Zewei Chu, Hai Wang, Kevin Gimpel, David McAllester

Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015).

coreference-resolution LAMBADA +2

Who did What: A Large-Scale Person-Centered Cloze Dataset

no code implementations • EMNLP 2016 • Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, David McAllester

We have constructed a new "Who-did-What" dataset of over 200,000 fill-in-the-gap (cloze) multiple choice reading comprehension problems drawn from the LDC English Gigaword newswire corpus.
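A toy illustration of the cloze format (the sentence and names below are invented, not dataset examples): blank a person mention in one sentence and choose the answer from other names in the article.

```python
def make_cloze(sentence, answer, distractors):
    """Replace the answer name with a blank; candidates are the answer
    plus other person names drawn from the source article."""
    assert answer in sentence
    question = sentence.replace(answer, "XXX", 1)
    return question, [answer] + distractors

q, choices = make_cloze("Alice Example met reporters after the summit.",
                        "Alice Example", ["Bob Sample", "Carol Demo"])
```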

Multiple-choice Reading Comprehension

Reducing Runtime by Recycling Samples

no code implementations • 5 Feb 2016 • Jialei Wang, Hai Wang, Nathan Srebro

Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.
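A schematic SVRG loop helps ground the claim: the inner steps already draw from a fixed sample set rather than fresh data, so reuse is natural (sketch only; `grad(w, z)` is a placeholder per-sample gradient):

```python
import numpy as np

def svrg(w, data, grad, lr=0.1, outer=10, inner=100):
    rng = np.random.default_rng(0)
    for _ in range(outer):
        # Full gradient at an anchor point, over previously seen samples.
        full_grad = np.mean([grad(w, z) for z in data], axis=0)
        w_anchor = w.copy()
        for _ in range(inner):
            z = data[rng.integers(len(data))]  # recycled, not fresh, sample
            w = w - lr * (grad(w, z) - grad(w_anchor, z) + full_grad)
    return w
```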
