Search Results for author: Tao Fang

Found 4 papers, 1 paper with code

FOCUS: Forging Originality through Contrastive Use in Self-Plagiarism for Language Models

no code implementations • 2 Jun 2024 • Kaixin Lan, Tao Fang, Derek F. Wong, Yabo Xu, Lidia S. Chao, Cecilia G. Zhao

Pre-trained Language Models (PLMs) have shown impressive results in various Natural Language Generation (NLG) tasks, such as powering chatbots and generating stories.

Language Modelling · Text Generation

Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance

no code implementations • 22 May 2023 • Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, Wei Bi

Proprietary Large Language Models (LLMs), such as ChatGPT, have garnered significant attention due to their exceptional capabilities in handling a diverse range of tasks.

Instruction Following

Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation

no code implementations • 4 Apr 2023 • Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang

To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT.

Grammatical Error Correction · In-Context Learning · +2
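The abstract above describes zero-shot and few-shot chain-of-thought (CoT) prompting via in-context learning for GEC. A minimal sketch of how such a few-shot CoT prompt might be assembled is shown below; the demonstration sentences, reasoning wording, and function name are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: building a few-shot CoT prompt for grammatical
# error correction (GEC). The in-context examples are hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "source": "She go to school every day.",
        "reasoning": "The subject 'She' is third-person singular, "
                     "so the verb should be 'goes'.",
        "corrected": "She goes to school every day.",
    },
    {
        "source": "I have saw that movie.",
        "reasoning": "The present perfect requires the past participle "
                     "'seen', not the simple past 'saw'.",
        "corrected": "I have seen that movie.",
    },
]

def build_few_shot_cot_prompt(sentence: str) -> str:
    """Assemble a few-shot CoT prompt asking a model to correct `sentence`."""
    parts = [
        "Correct the grammatical errors in each sentence. "
        "Explain your reasoning step by step, then give the corrected sentence.\n"
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Sentence: {ex['source']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Corrected: {ex['corrected']}\n")
    # The target sentence ends with an open "Reasoning:" cue so the model
    # produces its chain of thought before the correction.
    parts.append(f"Sentence: {sentence}")
    parts.append("Reasoning:")
    return "\n".join(parts)

print(build_few_shot_cot_prompt("He don't like apples."))
```

In the zero-shot CoT variant, the in-context examples would be dropped and replaced with a cue such as "Let's think step by step."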

Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN

1 code implementation • NeurIPS 2020 • Tao Fang, Yu Qi, Gang Pan

Reconstructing seen images from fMRI recordings is an intriguing research area in neuroscience and a potential brain-reading technology.

Generative Adversarial Network · Image Reconstruction
